Predictive Limitations of Physics-Informed Neural Networks in Vortex
Shedding
- URL: http://arxiv.org/abs/2306.00230v1
- Date: Wed, 31 May 2023 22:59:52 GMT
- Title: Predictive Limitations of Physics-Informed Neural Networks in Vortex
Shedding
- Authors: Pi-Yueh Chuang, Lorena A. Barba
- Abstract summary: We look at the flow around a 2D cylinder and find that data-free PINNs are unable to predict vortex shedding.
The data-driven PINN exhibits vortex shedding only while the training data are available, but reverts to the steady-state solution when the data flow stops.
The distribution of the Koopman eigenvalues on the complex plane suggests that the PINN method is numerically dispersive and diffusive.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent surge of interest in physics-informed neural network (PINN)
methods has led to a wave of studies that attest to their potential for solving
partial differential equations (PDEs) and predicting the dynamics of physical
systems. However, the predictive limitations of PINNs have not been thoroughly
investigated. We look at the flow around a 2D cylinder and find that data-free
PINNs are unable to predict vortex shedding. The data-driven PINN exhibits
vortex shedding only while the training data (from a traditional CFD solver)
are available, but reverts to the steady-state solution when the data flow
stops. We conduct dynamic mode decomposition and analyze the Koopman modes in
the solutions obtained with PINNs versus a traditional fluid solver (PetIBM).
The distribution of the Koopman eigenvalues on the complex plane suggests that
the PINN method is numerically dispersive and diffusive. The PINN method
reverts to the steady solution possibly as a consequence of spectral bias.
This case study raises
concerns about the ability of PINNs to predict flows with instabilities,
specifically vortex shedding. Our computational study supports the need for
more theoretical work to analyze the numerical properties of PINN methods. The
results in this paper are transparent and reproducible, with all data and code
available in public repositories and persistent archives; links are provided in
the paper repository at \url{https://github.com/barbagroup/jcs_paper_pinn}, and
a Reproducibility Statement within the paper.
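As a rough illustration of the diagnostic used in the abstract: exact dynamic mode decomposition (DMD) estimates Koopman eigenvalues from snapshot data, and eigenvalues with negative real part in continuous time indicate decay (diffusion). The sketch below is standard exact DMD applied to a synthetic two-mode field, not the paper's cylinder-flow data or its specific pipeline:

```python
import numpy as np

def dmd_eigenvalues(snapshots, dt, rank=None):
    """Exact DMD: estimate Koopman eigenvalues from a snapshot matrix.

    snapshots: (n_space, n_time) array of states at uniform steps dt.
    Returns continuous-time eigenvalues: Re > 0 growth, Re < 0 decay.
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:  # truncate to avoid dividing by tiny singular values
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Reduced linear operator mapping each snapshot to the next one
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    mu = np.linalg.eigvals(A_tilde)      # discrete-time eigenvalues
    return np.log(mu) / dt               # continuous-time eigenvalues

# Synthetic check: a decaying oscillation with eigenvalues -0.1 +/- 2j
t = np.arange(0.0, 10.0, 0.01)
x = np.linspace(0.0, 1.0, 64)
field = np.exp(-0.1 * t) * (
    np.outer(np.sin(np.pi * x), np.cos(2.0 * t))
    + np.outer(np.sin(2.0 * np.pi * x), np.sin(2.0 * t))
)
lam = dmd_eigenvalues(field, dt=0.01, rank=2)
print(np.sort_complex(lam))   # approximately -0.1 - 2j and -0.1 + 2j
```

A PINN solution that is numerically diffusive would shift such eigenvalues toward more negative real parts than the reference solver's.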
Related papers
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z) - RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs)
This paper proposes and theoretically studies a new training paradigm as region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z) - Learning solutions of parametric Navier-Stokes with physics-informed
neural networks [0.3989223013441816]
We leverage Physics-Informed Neural Networks (PINNs) to learn solution functions of parametric Navier-Stokes equations (NSE).
We treat the parameter(s) of interest as inputs to the PINNs alongside the coordinates, and train the PINNs on numerical solutions of the parametric PDEs for instances of the parameters.
We show that our proposed approach yields PINN models that learn the solution functions while ensuring that flow predictions are consistent with the conservation laws of mass and momentum.
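The parametric-input idea can be sketched minimally: the PDE parameter is concatenated with the space-time coordinates before being fed to the network. The untrained toy MLP, the layer sizes, and the Reynolds-number value below are hypothetical illustrations, not this paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes, rng):
    """Randomly initialized MLP weights (illustrative only, untrained)."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, inputs):
    h = inputs
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b

# Input: coordinates (x, y, t) plus the PDE parameter (here a hypothetical
# Reynolds number Re) concatenated into one vector; output: (u, v, p).
params = init_mlp([4, 32, 32, 3], rng)

coords = rng.uniform(size=(5, 3))    # (x, y, t) collocation points
Re = np.full((5, 1), 100.0)          # parameter instance Re = 100
out = mlp(params, np.hstack([coords, Re]))
print(out.shape)                     # prints (5, 3)
```

Training on several parameter instances then lets one network interpolate solutions across the parameter range.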
arXiv Detail & Related papers (2024-02-05T16:19:53Z) - Exact and soft boundary conditions in Physics-Informed Neural Networks
for the Variable Coefficient Poisson equation [0.0]
Boundary conditions (BCs) are a key component in every Physics-Informed Neural Network (PINN)
BCs constrain the underlying boundary value problem (BVP) that a PINN tries to approximate.
This study examines how soft loss-based and exact distance function-based BC imposition approaches differ when applied in PINNs.
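The two imposition strategies can be contrasted in a minimal 1D sketch: a soft BC enters the loss as a penalty term, while an exact BC is built into the solution ansatz via a distance function, so it holds identically. The `net` stand-in, the penalty weight, and the Dirichlet values below are hypothetical, not the study's setup:

```python
import numpy as np

def net(x):
    """Stand-in for a raw network output N_theta(x) (hypothetical)."""
    return np.sin(3.0 * x)       # any smooth function works for the sketch

a, b = 1.0, -2.0                 # Dirichlet data: u(0) = a, u(1) = b

# Exact imposition: distance-function ansatz, BCs satisfied by construction.
def u_exact_bc(x):
    return (1 - x) * a + x * b + x * (1 - x) * net(x)

# Soft imposition: BCs only penalized in the loss, satisfied approximately.
def bc_penalty(u, weight=10.0):
    return weight * ((u(0.0) - a) ** 2 + (u(1.0) - b) ** 2)

print(u_exact_bc(0.0), u_exact_bc(1.0))   # prints 1.0 -2.0, exactly
print(bc_penalty(net))                    # nonzero for a raw network
```

The soft variant adds a loss-weighting hyperparameter; the exact variant removes it at the cost of designing the distance function.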
arXiv Detail & Related papers (2023-10-04T03:16:03Z) - Auto-PINN: Understanding and Optimizing Physics-Informed Neural
Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z) - Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) means finding a small subset of the input graph's features that guides the model's prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z) - Physics-Informed Neural Operator for Learning Partial Differential
Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z) - Robust Learning of Physics Informed Neural Networks [2.86989372262348]
Physics-informed Neural Networks (PINNs) have been shown to be effective in solving partial differential equations.
This paper shows that a PINN can be sensitive to errors in its training data and can overfit them, dynamically propagating those errors over the solution domain of the PDE.
arXiv Detail & Related papers (2021-10-26T00:10:57Z) - Physics-Informed Neural Network Method for Solving One-Dimensional
Advection Equation Using PyTorch [0.0]
PINNs approach allows training neural networks while respecting the PDEs as a strong constraint in the optimization.
In standard small-scale circulation simulations, it is shown that the conventional approach incorporates a pseudo diffusive effect that is almost as large as the effect of the turbulent diffusion model.
Of all the schemes tested, only the PINNs approximation accurately predicted the outcome.
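The "strong constraint" here is the PDE residual evaluated at collocation points and penalized in the loss. For the 1D advection equation u_t + c u_x = 0, a minimal sketch looks as follows; it uses finite differences in place of the paper's PyTorch autograd, and a hand-picked exact solution as a stand-in for a trained network:

```python
import numpy as np

c = 1.0        # advection speed (assumed for this sketch)
eps = 1e-5     # finite-difference step standing in for autodiff

def u_hat(x, t):
    """Stand-in for a network prediction u_theta(x, t) (hypothetical)."""
    return np.exp(-(x - c * t) ** 2)   # happens to solve u_t + c u_x = 0

def pde_residual(x, t):
    """Residual of u_t + c u_x = 0, the term a PINN penalizes in its loss."""
    u_t = (u_hat(x, t + eps) - u_hat(x, t - eps)) / (2 * eps)
    u_x = (u_hat(x + eps, t) - u_hat(x - eps, t)) / (2 * eps)
    return u_t + c * u_x

xs = np.linspace(-2.0, 2.0, 9)                 # collocation points
loss = np.mean(pde_residual(xs, 0.5) ** 2)     # PDE term of the PINN loss
print(loss)   # ~0, because u_hat is an exact solution here
```

During training, minimizing this residual term (plus data and boundary terms) is what drives the network toward a PDE-consistent solution.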
arXiv Detail & Related papers (2021-03-15T05:39:17Z) - On the eigenvector bias of Fourier feature networks: From regression to
solving multi-scale PDEs with physics-informed neural networks [0.0]
We show that physics-informed neural networks (PINNs) struggle in cases where the target functions to be approximated exhibit high-frequency or multi-scale features.
We construct novel architectures that employ multi-scale random Fourier features and justify how such coordinate embedding layers can lead to robust and accurate PINN models.
arXiv Detail & Related papers (2020-12-18T04:19:30Z) - Probabilistic Numeric Convolutional Neural Networks [80.42120128330411]
Continuous input signals like images and time series that are irregularly sampled or have missing values are challenging for existing deep learning methods.
We propose Probabilistic Numeric Convolutional Neural Networks, which represent features as Gaussian processes (GPs).
We then define a convolutional layer as the evolution of a PDE defined on this GP, followed by a nonlinearity.
In experiments we show that our approach yields a $3\times$ reduction in error over the previous state of the art on the SuperPixel-MNIST dataset and competitive performance on the PhysioNet 2012 medical time-series dataset.
arXiv Detail & Related papers (2020-10-21T10:08:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.