Non-linear Independent Dual System (NIDS) for Discretization-independent
Surrogate Modeling over Complex Geometries
- URL: http://arxiv.org/abs/2109.07018v2
- Date: Thu, 16 Sep 2021 14:41:33 GMT
- Title: Non-linear Independent Dual System (NIDS) for Discretization-independent
Surrogate Modeling over Complex Geometries
- Authors: James Duvall, Karthik Duraisamy, Shaowu Pan
- Abstract summary: Non-linear independent dual system (NIDS) is a deep learning surrogate model for discretization-independent, continuous representation of PDE solutions.
NIDS can be used for prediction over domains with complex, variable geometries and mesh topologies.
Test cases include a vehicle aerodynamics problem with complex geometry and data scarcity, enabled by a training method in which cases are gradually added as training progresses.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerical solutions of partial differential equations (PDEs) require
expensive simulations, limiting their application in design optimization
routines, model-based control, or solution of large-scale inverse problems.
Existing Convolutional Neural Network-based frameworks for surrogate modeling
require lossy pixelization and data-preprocessing, which is not suitable for
realistic engineering applications. Therefore, we propose non-linear
independent dual system (NIDS), which is a deep learning surrogate model for
discretization-independent, continuous representation of PDE solutions, and can
be used for prediction over domains with complex, variable geometries and mesh
topologies. NIDS leverages implicit neural representations to develop a
non-linear mapping between problem parameters and spatial coordinates to state
predictions by combining evaluations of a case-wise parameter network and a
point-wise spatial network in a linear output layer. The input features of the
spatial network include physical coordinates augmented by a minimum distance
function evaluation to implicitly encode the problem geometry. The form of the
overall output layer induces a dual system, where each term in the map is
non-linear and independent. Further, we propose a minimum distance
function-driven weighted sum of NIDS models using a shared parameter network to
enforce boundary conditions by construction under certain restrictions. The
framework is applied to predict solutions around complex,
parametrically-defined geometries on non-parametrically-defined meshes with
solutions obtained many orders of magnitude faster than the full order models.
Test cases include a vehicle aerodynamics problem with complex geometry and
data scarcity, enabled by a training method in which more cases are gradually
added as training progresses.
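The dual-system structure described in the abstract can be sketched in a few lines: a case-wise parameter network produces coefficients, a point-wise spatial network produces basis features from coordinates augmented with a minimum distance value, and the two are combined in a linear output layer. The network sizes, random stand-in weights, and helper names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(sizes):
    """Random-weight MLP as a stand-in for a trained network (illustrative only)."""
    layers = [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        h = x
        for k, (W, b) in enumerate(layers):
            h = h @ W + b
            if k < len(layers) - 1:
                h = np.tanh(h)
        return h
    return forward

LATENT = 16
param_net = make_mlp([4, 32, LATENT + 1])   # case-wise: problem parameters -> coefficients + bias
spatial_net = make_mlp([3, 32, LATENT])     # point-wise: (x, y, d) -> basis features

def min_distance(xy, surface):
    """Minimum distance from each query point to sampled geometry points,
    used as an extra input feature to implicitly encode the geometry."""
    return np.linalg.norm(xy[:, None, :] - surface[None, :, :], axis=-1).min(axis=1)

def nids_predict(mu, xy, surface):
    """u(x; mu) = g(x, d(x)) . c(mu): two independent non-linear maps
    combined in a linear output layer (the 'dual system')."""
    d = min_distance(xy, surface)
    g = spatial_net(np.column_stack([xy, d]))   # evaluated per spatial point
    c = param_net(mu[None, :])[0]               # evaluated once per case
    return g @ c[:LATENT] + c[LATENT]

mu = rng.standard_normal(4)                     # hypothetical case parameters
xy = rng.uniform(-1.0, 1.0, size=(100, 2))      # nodes of any mesh, any topology
surface = rng.uniform(-0.2, 0.2, size=(50, 2))  # sampled geometry boundary
u = nids_predict(mu, xy, surface)
```

Because the spatial network consumes raw coordinates plus the distance feature, the same trained model can be queried at the nodes of any mesh, which is what makes the representation discretization-independent.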
Related papers
- Shape-informed surrogate models based on signed distance function domain encoding [8.052704959617207]
We propose a non-intrusive method to build surrogate models that approximate the solution of parameterized partial differential equations (PDEs).
Our approach is based on the combination of two neural networks (NNs).
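The core encoding idea can be shown minimally: append a signed distance value to each query point so the network sees the geometry through an extra input feature. The circle geometry below is an illustrative assumption, not the paper's test case.

```python
import numpy as np

def sdf_circle(p, center=(0.0, 0.0), radius=0.5):
    """Signed distance to a circle: negative inside, zero on, positive outside."""
    return np.linalg.norm(p - np.asarray(center), axis=-1) - radius

# Append the SDF value to each query point, so a surrogate network would
# see (x, y, sdf) and the domain geometry enters through the last feature:
pts = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
features = np.column_stack([pts, sdf_circle(pts)])
# features[:, 2] is [-0.5, 0.0, 0.5]: inside, on, and outside the boundary
```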
arXiv Detail & Related papers (2024-09-19T01:47:04Z)
- Operator Learning with Neural Fields: Tackling PDEs on General Geometries [15.65577053925333]
Machine learning approaches for solving partial differential equations require learning mappings between function spaces.
The new CORAL method leverages coordinate-based networks to solve PDEs under general geometric constraints.
arXiv Detail & Related papers (2023-06-12T17:52:39Z)
- A graph convolutional autoencoder approach to model order reduction for parametrized PDEs [0.8192907805418583]
The present work proposes a framework for nonlinear model order reduction based on a Graph Convolutional Autoencoder (GCA-ROM).
We develop a non-intrusive and data-driven nonlinear reduction approach, exploiting GNNs to encode the reduced manifold and enable fast evaluations of parametrized PDEs.
arXiv Detail & Related papers (2023-05-15T12:01:22Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art and yields a relative gain of 11.5% averaged on seven benchmarks.
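The classical spectral idea behind such a block can be shown in a few lines: in an orthogonal basis, a linear differential operator acts diagonally, so the solve reduces to element-wise division of coefficients. The 1-D Poisson setup below illustrates that classical mechanism only; it is an assumption for exposition, not the LSM architecture.

```python
import numpy as np

# Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 in a sine basis,
# where -d^2/dx^2 acts diagonally with eigenvalues (k*pi)^2.
n, K = 64, 8
x = np.arange(n) / n
f = np.sin(2 * np.pi * x)

k = np.arange(1, K + 1)
phi = np.sin(np.pi * np.outer(k, x))     # (K, n) sine basis on the grid
coeff = 2.0 * (phi @ f) / n              # project f onto the basis
u = (coeff / (np.pi * k) ** 2) @ phi     # diagonal solve, then reconstruct
# u matches the exact solution sin(2*pi*x) / (4*pi^2)
```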
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- Neural Implicit Flow: a mesh-agnostic dimensionality reduction paradigm of spatio-temporal data [4.996878640124385]
We propose a general framework called Neural Implicit Flow (NIF) that enables a mesh-agnostic, low-rank representation of large-scale, parametric, spatio-temporal data.
NIF consists of two modified multilayer perceptrons: (i) ShapeNet, which isolates and represents the spatial complexity, and (ii) ParameterNet, which accounts for any other input measurements, including parametric dependencies, time, and sensor measurements.
We demonstrate the utility of NIF for parametric surrogate modeling, enabling the interpretable representation and compression of complex spatio-temporal dynamics, efficient many-spatial-query evaluations, and improved performance for sparse reconstruction.
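One way to read the ShapeNet/ParameterNet split is as a hypernetwork: ParameterNet maps time, parameters, and sensor inputs to the weights of a point-wise ShapeNet, which is then evaluated at arbitrary mesh-free coordinates. The layer sizes and the frozen random map standing in for training are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
H = 8                                    # ShapeNet hidden width (illustrative)
N_W = 2 * H + H + H + 1                  # W1 (2xH), b1 (H), W2 (Hx1), b2 (1)
A = rng.standard_normal((2, N_W)) * 0.1  # random ParameterNet map, stand-in for a trained net

def parameter_net(t_mu):
    """Maps (time, parameter, sensor, ...) inputs to the weights of ShapeNet."""
    return t_mu @ A

def shape_net(x, w):
    """Point-wise spatial network whose weights come from ParameterNet."""
    W1 = w[:2 * H].reshape(2, H)
    b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H].reshape(H, 1)
    b2 = w[4 * H:]
    return (np.tanh(x @ W1 + b1) @ W2 + b2).ravel()

w = parameter_net(np.array([0.3, 1.0]))          # one (time, parameter) pair
u = shape_net(rng.uniform(-1, 1, (50, 2)), w)    # query any 50 mesh-free points
```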
arXiv Detail & Related papers (2022-04-07T05:02:58Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
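The claim that message passing solvers representationally contain finite differences can be checked directly: with a hand-set linear message function (standing in for what a learned one could recover), one aggregation step over a 1-D chain graph reproduces the centered second-difference stencil. The chain-graph setup below is an illustrative assumption.

```python
import numpy as np

# 1-D chain graph: node i is connected to i-1 and i+1
n = 8
h = 1.0 / n
x = np.arange(n) * h
u = np.sin(2 * np.pi * x)
edges = [(i, j) for i in range(n) for j in (i - 1, i + 1) if 0 <= j < n]

def message(u_i, u_j, dx):
    """Hand-set linear message; a learned message function could recover it."""
    return (u_j - u_i) / dx ** 2

# One message-passing aggregation step (sum over incoming messages):
agg = np.zeros(n)
for i, j in edges:
    agg[i] += message(u[i], u[j], h)
# Interior nodes now hold (u[i-1] - 2*u[i] + u[i+1]) / h^2,
# i.e. the finite-difference Laplacian of u.
```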
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Physics and Equality Constrained Artificial Neural Networks: Application to Partial Differential Equations [1.370633147306388]
Physics-informed neural networks (PINNs) have been proposed to learn the solution of partial differential equations (PDEs).
Here, we show that this specific way of formulating the objective function is the source of severe limitations in the PINN approach.
We propose a versatile framework that can tackle both inverse and forward problems.
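The contrast between the standard weighted-sum PINN objective and an equality-constrained alternative can be sketched abstractly; treating the boundary term via an augmented-Lagrangian construction is an assumption about the specifics here, not a statement of the paper's exact formulation.

```python
import numpy as np

def pinn_loss(res, bc, w=1.0):
    """Standard PINN objective: a weighted sum of PDE-residual and
    boundary terms, whose balance weight w must be hand-tuned."""
    return np.mean(res ** 2) + w * np.mean(bc ** 2)

def constrained_loss(res, bc, lam, mu):
    """Equality-constrained alternative: treat mean(bc^2) = 0 as a
    constraint with multiplier lam and penalty mu (augmented-Lagrangian
    style), removing the hand-tuned balance weight."""
    c = np.mean(bc ** 2)
    return np.mean(res ** 2) + lam * c + 0.5 * mu * c ** 2

res = np.array([1.0, 1.0])   # hypothetical PDE residuals at collocation points
bc = np.array([2.0])         # hypothetical boundary-condition violations
L1 = pinn_loss(res, bc)                           # 1.0 + 4.0 = 5.0
L2 = constrained_loss(res, bc, lam=0.5, mu=2.0)   # 1.0 + 2.0 + 16.0 = 19.0
```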
arXiv Detail & Related papers (2021-09-30T05:55:35Z)
- Reinforcement Learning for Adaptive Mesh Refinement [63.7867809197671]
We propose a novel formulation of AMR as a Markov decision process and apply deep reinforcement learning to train refinement policies directly from simulation.
The model sizes of these policy architectures are independent of the mesh size and hence scale to arbitrarily large and complex simulations.
arXiv Detail & Related papers (2021-03-01T22:55:48Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.