INFINITY: Neural Field Modeling for Reynolds-Averaged Navier-Stokes
Equations
- URL: http://arxiv.org/abs/2307.13538v1
- Date: Tue, 25 Jul 2023 14:35:55 GMT
- Title: INFINITY: Neural Field Modeling for Reynolds-Averaged Navier-Stokes
Equations
- Authors: Louis Serrano, Leon Migus, Yuan Yin, Jocelyn Ahmed Mazari, Patrick
Gallinari
- Abstract summary: INFINITY is a deep learning model that encodes geometric information and physical fields into compact representations.
Our framework achieves state-of-the-art performance by accurately inferring physical fields throughout the volume and surface.
Our model can correctly predict drag and lift coefficients while adhering to the equations.
- Score: 13.242926257057084
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: For numerical design, the development of efficient and accurate surrogate
models is paramount. They allow us to approximate complex physical phenomena,
thereby reducing the computational burden of direct numerical simulations. We
propose INFINITY, a deep learning model that utilizes implicit neural
representations (INRs) to address this challenge. Our framework encodes
geometric information and physical fields into compact representations and
learns a mapping between them to infer the physical fields. We use an airfoil
design optimization problem as an example task and we evaluate our approach on
the challenging AirfRANS dataset, which closely resembles real-world industrial
use-cases. The experimental results demonstrate that our framework achieves
state-of-the-art performance by accurately inferring physical fields throughout
the volume and surface. Additionally, we demonstrate its applicability in
contexts such as design exploration and shape optimization: our model can
correctly predict drag and lift coefficients while adhering to the equations.
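The abstract describes the core idea: an implicit neural representation maps spatial coordinates to physical field values, so the field can be queried continuously without a mesh. A minimal sketch of such a coordinate MLP is given below; the layer sizes, the sine activation, and the field names are illustrative assumptions, not INFINITY's actual architecture.

```python
# Minimal sketch of an implicit neural representation (INR): a small MLP
# mapping spatial coordinates (x, y) to a scalar physical field value,
# e.g. pressure around an airfoil. Sizes and the sine activation are
# illustrative assumptions, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights and zero biases for a small coordinate MLP."""
    return [(rng.normal(0, 1 / np.sqrt(m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def inr_forward(params, coords):
    """Evaluate the field at arbitrary coordinates; no mesh required."""
    h = coords
    for W, b in params[:-1]:
        h = np.sin(h @ W + b)  # sine activation (SIREN-style), an assumption
    W, b = params[-1]
    return h @ W + b

params = init_mlp([2, 64, 64, 1])       # 2D coords -> one scalar field
pts = rng.uniform(-1, 1, (5, 2))        # 5 query points in the domain
field = inr_forward(params, pts)
print(field.shape)                      # one field value per query point
```

In the paper's framework, the weights (or a conditioning latent code) would additionally encode the airfoil geometry, and a second network would map the geometry representation to the field representation.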
Related papers
- Implicitly Guided Design with PropEn: Match your Data to Follow the Gradient [52.2669490431145]
PropEn is inspired by 'matching', which enables implicit guidance without training a discriminator.
We show that training with a matched dataset approximates the gradient of the property of interest while remaining within the data distribution.
arXiv Detail & Related papers (2024-05-28T11:30:19Z)
- From Fourier to Neural ODEs: Flow Matching for Modeling Complex Systems [20.006163951844357]
We propose a simulation-free framework for training neural ordinary differential equations (NODEs).
We employ Fourier analysis to estimate temporal and, potentially, high-order spatial gradients from noisy observational data.
Our approach outperforms state-of-the-art methods in terms of training time, dynamics prediction, and robustness.
arXiv Detail & Related papers (2024-05-19T13:15:23Z)
- Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
We show that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid such adversarial examples.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
- Discovering Interpretable Physical Models using Symbolic Regression and Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We demonstrate the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Learning Deep Implicit Fourier Neural Operators (IFNOs) with Applications to Heterogeneous Material Modeling [3.9181541460605116]
We propose to use data-driven modeling to predict a material's response without using conventional models.
The material response is modeled by learning the implicit mappings between loading conditions and the resultant displacement and/or damage fields.
We demonstrate the performance of our proposed method for a number of examples, including hyperelastic, anisotropic and brittle materials.
arXiv Detail & Related papers (2022-03-15T19:08:13Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Lower Bounds for ME-NODE and develop efficient training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- A data-driven peridynamic continuum model for upscaling molecular dynamics [3.1196544696082613]
We propose a learning framework to extract, from molecular dynamics data, an optimal Linear Peridynamic Solid model.
We provide sufficient well-posedness conditions for discretized LPS models with sign-changing influence functions.
This framework guarantees that the resulting model is mathematically well-posed, physically consistent, and that it generalizes well to settings that are different from the ones used during training.
arXiv Detail & Related papers (2021-08-04T07:07:47Z)
- Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z)
- Gradient-Based Training and Pruning of Radial Basis Function Networks with an Application in Materials Physics [0.24792948967354234]
We propose a gradient-based technique for training radial basis function networks with an efficient and scalable open-source implementation.
We derive novel closed-form optimization criteria for pruning the models for continuous as well as binary data.
arXiv Detail & Related papers (2020-04-06T11:32:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.