Data Scoping: Effectively Learning the Evolution of Generic Transport PDEs
- URL: http://arxiv.org/abs/2405.01319v1
- Date: Thu, 2 May 2024 14:24:56 GMT
- Title: Data Scoping: Effectively Learning the Evolution of Generic Transport PDEs
- Authors: Jiangce Chen, Wenzhuo Xu, Zeda Xu, Noelia Grande Gutiérrez, Sneha Prabha Narra, Christopher McComb
- Abstract summary: Transport phenomena are governed by time-dependent partial differential equations (PDEs) describing mass, momentum, and energy conservation.
Deep learning architectures are fundamentally incompatible with the simulation of these PDEs.
This paper proposes a distributed data scoping method with linear time complexity that limits the scope of information used to predict local properties.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Transport phenomena (e.g., fluid flows) are governed by time-dependent partial differential equations (PDEs) describing mass, momentum, and energy conservation, and are ubiquitous in many engineering applications. However, deep learning architectures are fundamentally incompatible with the simulation of these PDEs. This paper clearly articulates and then solves this incompatibility. The local dependency of generic transport PDEs implies that only local information is needed to predict the physical properties at a location at the next time step. However, a deep learning architecture inevitably enlarges the scope of information used to make such predictions as the number of layers increases, which can cause sluggish convergence and compromise generalizability. This paper solves this problem by proposing a distributed data scoping method with linear time complexity that strictly limits the scope of information used to predict the local properties. Numerical experiments across multiple physics show that our data scoping method significantly accelerates training convergence and improves the generalizability of benchmark models on large-scale engineering simulations. Specifically, over geometries not included in the training data for heat transfer simulation, it increases the accuracy of Convolutional Neural Networks (CNNs) by 21.7% and that of Fourier Neural Operators (FNOs) by 38.5% on average.
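The core idea lends itself to a short illustration. Below is a minimal PyTorch sketch of strictly scoped prediction: every output point sees only a fixed local patch of the current state, so the effective receptive field stays constant no matter how deep the point-wise model is. The function names and the zero-padded boundary handling are our assumptions, not the paper's code.

```python
# Minimal sketch of the data-scoping idea (hypothetical API, not the
# authors' implementation): restrict each prediction to a fixed local
# window so the receptive field cannot grow with network depth.
import torch

def scoped_predict(model, field, radius):
    """Predict the next time step point-by-point from local patches.

    field  : (H, W) tensor holding the current physical state
    radius : half-width of the local scope; patch size is 2*radius + 1
    """
    H, W = field.shape
    k = 2 * radius + 1
    # Pad so every point has a full neighborhood (zero padding is an
    # arbitrary choice here; the paper's boundary handling may differ).
    padded = torch.nn.functional.pad(field, (radius,) * 4)
    # unfold extracts all k-by-k patches in linear time.
    patches = padded.unfold(0, k, 1).unfold(1, k, 1)
    patches = patches.reshape(H * W, -1)   # one flattened patch per output point
    next_vals = model(patches)             # (H*W, 1) strictly local predictions
    return next_vals.reshape(H, W)
```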
Related papers
- PhyMPGN: Physics-encoded Message Passing Graph Network for spatiotemporal PDE systems
We propose a new graph learning approach, namely, Physics-encoded Message Passing Graph Network (PhyMPGN).
We incorporate a GNN into a numerical integrator to approximate the temporal marching of spatiotemporal dynamics for a given PDE system.
PhyMPGN is capable of accurately predicting various types of spatiotemporal dynamics on coarse unstructured meshes.
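A hedged sketch of the GNN-inside-integrator idea follows; the explicit midpoint rule and the message-passing details are illustrative choices, not PhyMPGN's exact scheme.

```python
# Illustrative sketch: a learned right-hand side (one round of message
# passing) embedded in a classical time integrator.
import torch

def gnn_rhs(u, edge_index, mlp):
    """Approximate du/dt on mesh nodes; mlp maps 2*d -> d features."""
    src, dst = edge_index                      # (2, E) mesh connectivity
    messages = mlp(torch.cat([u[src], u[dst] - u[src]], dim=-1))
    return torch.zeros_like(u).index_add_(0, dst, messages)  # aggregate per node

def rk2_step(u, dt, edge_index, mlp):
    """One explicit midpoint step with the learned right-hand side."""
    k1 = gnn_rhs(u, edge_index, mlp)
    k2 = gnn_rhs(u + 0.5 * dt * k1, edge_index, mlp)
    return u + dt * k2
```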
arXiv Detail & Related papers (2024-10-02T08:54:18Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
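One plausible reading of the recipe, sketched below: build a diffusion operator from pairwise data affinities, then take the Shannon entropy of its normalized eigenvalue spectrum. The Gaussian kernel, bandwidth, and diffusion time are our placeholder choices.

```python
# Illustrative diffusion spectral entropy (our reading of the idea,
# not the paper's exact estimator).
import numpy as np

def diffusion_spectral_entropy(X, sigma=1.0, t=1):
    """Entropy of the eigenvalue spectrum of a diffusion operator on X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-d2 / (2 * sigma ** 2))                   # Gaussian affinity
    P = K / K.sum(axis=1, keepdims=True)                 # row-stochastic diffusion matrix
    eigvals = np.abs(np.linalg.eigvals(P)) ** t          # diffuse for t steps
    p = eigvals / eigvals.sum()                          # normalize to a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())                 # Shannon entropy
```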
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Learning Generic Solutions for Multiphase Transport in Porous Media via the Flux Functions Operator
DeepONet has emerged as a powerful tool for accelerating the solution of PDEs.
We use Physics-Informed DeepONets (PI-DeepONets) to learn the mapping from flux functions to solutions without any paired input-output observations.
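As a rough sketch of the PI-DeepONet recipe: a standard DeepONet evaluates the solution as an inner product of branch and trunk encodings, and training minimizes a PDE residual at collocation points instead of a supervised loss. The residual below uses generic linear advection purely for illustration; the paper's flux-function physics is more involved.

```python
# Sketch under our assumptions: DeepONet forward pass plus a
# physics-residual loss in place of paired training data.
import torch

def deeponet(branch, trunk, f_sensors, xt):
    """u(x,t) = <branch(f), trunk(x,t)>: the standard DeepONet evaluation."""
    b = branch(f_sensors)              # (batch, p) encoding of the input function
    t = trunk(xt)                      # (batch, p) encoding of the query point
    return (b * t).sum(-1, keepdim=True)

def physics_loss(branch, trunk, f_sensors, xt):
    """Residual of u_t + u_x = 0 at collocation points (illustrative PDE)."""
    xt = xt.requires_grad_(True)       # columns: (x, t)
    u = deeponet(branch, trunk, f_sensors, xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    return ((u_t + u_x) ** 2).mean()
```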
arXiv Detail & Related papers (2023-07-03T21:10:30Z)
- Training Deep Surrogate Models with Large Scale Online Learning
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This work proposes an open-source online training framework for deep surrogate models.
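A minimal sketch of what such an online pipeline might look like, as a simple producer-consumer queue (the framework in the paper may be organized quite differently): solver workers stream fresh (input, solution) pairs, and the trainer consumes them directly instead of reading a static dataset from disk.

```python
# Hypothetical online-training loop: solvers produce, the trainer consumes.
import queue, threading

buffer = queue.Queue(maxsize=1024)

def solver_worker(solve, sample_input):
    # Run in a background thread, e.g.:
    # threading.Thread(target=solver_worker, args=(solve, sample), daemon=True).start()
    while True:
        x = sample_input()
        buffer.put((x, solve(x)))      # blocks if the trainer falls behind

def trainer(model, opt, loss_fn, steps):
    for _ in range(steps):
        x, y = buffer.get()            # consume fresh simulations as they arrive
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```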
arXiv Detail & Related papers (2023-06-28T12:02:27Z)
- Temporal Subsampling Diminishes Small Spatial Scales in Recurrent Neural Network Emulators of Geophysical Turbulence
We investigate how an often overlooked processing step affects the quality of an emulator's predictions.
We implement ML architectures from a class of methods called reservoir computing: (1) a form of Nonlinear Vector Autoregression (NVAR), and (2) an Echo State Network (ESN).
In all cases, subsampling the training data consistently leads to an increased bias at small scales that resembles numerical diffusion.
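For reference, a minimal Echo State Network in its standard form (hyperparameters are placeholders, not the paper's settings); only the linear readout on top of the reservoir state is trained, typically by ridge regression, and the temporal spacing of the training pairs is exactly the subsampling step the paper scrutinizes.

```python
# Standard ESN reservoir update; readout training omitted for brevity.
import numpy as np

class ESN:
    def __init__(self, n_in, n_res, rho=0.9, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((n_res, n_res))
        # Rescale so the reservoir has spectral radius rho (echo-state property).
        self.W = rho * W / np.max(np.abs(np.linalg.eigvals(W)))
        self.W_in = rng.standard_normal((n_res, n_in))
        self.r = np.zeros(n_res)

    def step(self, u):
        # Basic (non-leaky) reservoir update driven by input u.
        self.r = np.tanh(self.W @ self.r + self.W_in @ u)
        return self.r
```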
arXiv Detail & Related papers (2023-04-28T21:34:53Z)
- MAgNet: Mesh Agnostic Neural PDE Solver
Climate predictions require fine spatio-temporal resolution to resolve all turbulent scales in the fluid simulations.
Current numerical models solve PDEs on grids that are too coarse (3 km to 200 km on each side).
We design a novel architecture that predicts the spatially continuous solution of a PDE given a spatial position query.
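A sketch of the coordinate-query idea under our assumptions (this illustrates mesh-agnostic decoding in general, not MAgNet's exact architecture): a decoder conditioned on a latent state can be evaluated at any continuous spatial position.

```python
# Hypothetical coordinate-conditioned decoder: query the solution anywhere.
import torch
import torch.nn as nn

class CoordinateDecoder(nn.Module):
    def __init__(self, latent_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 2, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, xy):
        # z: (B, latent_dim) latent state; xy: (B, 2) continuous query points.
        return self.net(torch.cat([z, xy], dim=-1))  # solution value at xy
```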
arXiv Detail & Related papers (2022-10-11T14:52:20Z)
- Physics-informed Convolutional Neural Networks for Temperature Field Prediction of Heat Source Layout without Labeled Data
This paper develops a physics-informed convolutional neural network (CNN) as a surrogate for thermal simulation.
The network can learn a mapping from heat source layout to the steady-state temperature field without labeled data, which amounts to solving an entire family of partial differential equations (PDEs).
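One way to see how training can proceed without labels, sketched under our assumptions: penalize the finite-difference residual of the steady heat equation -kΔT = q on the predicted field, so the PDE itself supplies the supervision. The kernel and cropping below are illustrative, not the paper's exact loss.

```python
# Label-free physics loss: the 5-point Laplacian stencil applied by
# convolution gives the PDE residual on grid interiors.
import torch
import torch.nn.functional as F

LAPLACIAN = torch.tensor([[[[0., 1., 0.],
                            [1., -4., 1.],
                            [0., 1., 0.]]]])   # (1, 1, 3, 3) stencil

def heat_residual_loss(T_pred, source, k=1.0, h=1.0):
    """T_pred, source: (B, 1, H, W) tensors; h is the grid spacing."""
    lap = F.conv2d(T_pred, LAPLACIAN) / h ** 2   # interior Laplacian, 'valid' conv
    q = source[:, :, 1:-1, 1:-1]                 # crop source to match
    return ((-k * lap - q) ** 2).mean()
```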
arXiv Detail & Related papers (2021-09-26T03:24:23Z)
- A Gradient-based Deep Neural Network Model for Simulating Multiphase Flow in Porous Media
We describe a gradient-based deep neural network (GDNN) constrained by the physics related to multiphase flow in porous media.
We demonstrate that GDNN can effectively predict the nonlinear patterns of subsurface responses.
arXiv Detail & Related papers (2021-04-30T02:14:00Z)
- Large-scale Neural Solvers for Partial Differential Equations
Solving partial differential equations (PDEs) is an indispensable part of many branches of science, as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
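For context, the standard PINN recipe that these solvers build on, here for a 1D Poisson problem u'' = f (GatedPINN adds a gated architecture on top of this basic loss):

```python
# Minimal PINN loss: PDE residual at collocation points plus a
# boundary-condition penalty; no mesh or discretization required.
import torch

def pinn_loss(model, x, f, x_bc, u_bc):
    x = x.requires_grad_(True)             # collocation points, shape (N, 1)
    u = model(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    residual = ((u_xx - f(x)) ** 2).mean()           # PDE residual u'' = f
    boundary = ((model(x_bc) - u_bc) ** 2).mean()    # boundary conditions
    return residual + boundary
```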
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- Combining Differentiable PDE Solvers and Graph Neural Networks for Fluid Flow Prediction
We develop a hybrid (graph) neural network that combines a traditional graph convolutional network with an embedded differentiable fluid dynamics simulator inside the network itself.
We show that we can both generalize well to new situations and benefit from the substantial speedup of neural network CFD predictions.
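A conceptual sketch of the hybrid design (module names and signatures are placeholders, not the authors' API): a differentiable coarse solver supplies a physics-based baseline inside the forward pass, and a graph network learns the correction, with gradients flowing through both.

```python
# Hypothetical hybrid model: differentiable solver + learned graph correction.
import torch.nn as nn

class HybridFlowModel(nn.Module):
    def __init__(self, coarse_solver, gcn):
        super().__init__()
        self.solver = coarse_solver    # differentiable CFD step on a coarse mesh
        self.gcn = gcn                 # learned correction on the fine graph

    def forward(self, state, graph):
        coarse = self.solver(state)              # physics-based baseline
        return coarse + self.gcn(coarse, graph)  # gradients flow through both terms
```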
arXiv Detail & Related papers (2020-07-08T21:23:19Z)
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
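The standard identity behind such schemes, for reference: the Fokker-Planck equation is the Wasserstein-2 gradient flow of the relative entropy F(ρ) = ∫ ρ log(ρ/π) with target density π ∝ e^{-V}.

```latex
% Fokker-Planck as the Wasserstein-2 gradient flow of relative entropy:
\[
\partial_t \rho
  = \nabla \cdot \Big( \rho \, \nabla \frac{\delta F}{\delta \rho} \Big)
  = \nabla \cdot (\rho \nabla V) + \Delta \rho .
\]
```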
arXiv Detail & Related papers (2019-10-31T02:26:20Z)