Multi-fidelity physics constrained neural networks for dynamical systems
- URL: http://arxiv.org/abs/2402.02031v1
- Date: Sat, 3 Feb 2024 05:05:26 GMT
- Title: Multi-fidelity physics constrained neural networks for dynamical systems
- Authors: Hao Zhou, Sibo Cheng, Rossella Arcucci
- Abstract summary: We propose the Multi-Scale Physics-Constrained Neural Network (MSPCNN).
MSPCNN offers a novel methodology for incorporating data with different levels of fidelity into a unified latent space.
Unlike conventional methods, MSPCNN also manages to employ multi-fidelity data to train the predictive model.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Physics-constrained neural networks are commonly employed to enhance
prediction robustness compared to purely data-driven models, achieved through
the inclusion of physical constraint losses during the model training process.
However, one of the major challenges of physics-constrained neural networks
consists of the training complexity especially for high-dimensional systems. In
fact, conventional physics-constrained models rely on single-fidelity data
necessitating the assessment of physical constraints within high-dimensional
fields, which introduces computational difficulties. Furthermore, due to the
fixed input size of the neural networks, employing multi-fidelity training data
can also be cumbersome. In this paper, we propose the Multi-Scale
Physics-Constrained Neural Network (MSPCNN), which offers a novel methodology
for incorporating data with different levels of fidelity into a unified latent
space through a customised multi-fidelity autoencoder. Additionally, multiple
decoders are concurrently trained to map latent representations of inputs into
various fidelity physical spaces. As a result, during the training of
predictive models, physical constraints can be evaluated within low-fidelity
spaces, yielding a trade-off between training efficiency and accuracy. In
addition, unlike conventional methods, MSPCNN also manages to employ
multi-fidelity data to train the predictive model. We assess the performance of
MSPCNN in two fluid dynamics problems, namely a two-dimensional Burgers' system
and a shallow water system. Numerical results clearly demonstrate the
enhancement of prediction accuracy and noise robustness when introducing
physical constraints in low-fidelity fields. On the other hand, as expected,
the training complexity can be significantly reduced by computing physical
constraint loss in the low-fidelity field rather than the high-fidelity one.
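The core idea can be sketched in a few lines. The following is an illustrative toy (not the authors' code): the customised multi-fidelity autoencoder is stood in for by linear maps, the grid sizes and latent dimension are made-up, and a 1D diffusion residual replaces the Burgers'/shallow-water constraints. It shows the mechanism the abstract describes: one shared latent space, one decoder per fidelity level, and the physics-constraint loss evaluated on the cheap low-fidelity decode while the data loss stays in the high-fidelity space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumed, not from the paper):
# high-fidelity field: 64 points, low-fidelity field: 16 points, latent: 8.
n_hi, n_lo, n_lat = 64, 16, 8

# Linear maps stand in for the multi-fidelity autoencoder:
# a shared encoder plus one decoder per fidelity level.
enc_hi = rng.standard_normal((n_lat, n_hi)) / np.sqrt(n_hi)
dec_hi = rng.standard_normal((n_hi, n_lat)) / np.sqrt(n_lat)
dec_lo = rng.standard_normal((n_lo, n_lat)) / np.sqrt(n_lat)

def physics_residual(u, dx):
    # Central-difference d^2u/dx^2: a stand-in physics constraint.
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2

u_hi = np.sin(np.linspace(0.0, np.pi, n_hi))  # a sample high-fidelity field
z = enc_hi @ u_hi                             # unified latent representation

# MSPCNN's trade-off: the physical-constraint loss is evaluated on the
# low-fidelity decode (16 points), which is cheaper per training step
# than evaluating it on the 64-point high-fidelity field.
u_lo = dec_lo @ z
loss_phys_lo = np.mean(physics_residual(u_lo, dx=1.0 / (n_lo - 1)) ** 2)

# The data (reconstruction) loss can still use the high-fidelity decoder.
u_rec = dec_hi @ z
loss_data = np.mean((u_rec - u_hi) ** 2)

total_loss = loss_data + 0.1 * loss_phys_lo
```

In the actual method the encoder and decoders are trained networks and the constraint is the PDE residual of the target system; the point here is only where each loss term is computed.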
Related papers
- Can physical information aid the generalization ability of Neural
Networks for hydraulic modeling? [0.0]
The application of Neural Networks to river hydraulics is still fledgling, not least because the field suffers from data scarcity.
We propose to mitigate this problem by introducing physical information into the training phase.
We show that incorporating such soft physical information can improve predictive capabilities.
arXiv Detail & Related papers (2024-03-13T14:51:16Z) - Physics-Informed Neural Networks with Hard Linear Equality Constraints [9.101849365688905]
This work proposes a novel physics-informed neural network, KKT-hPINN, which rigorously guarantees hard linear equality constraints.
Experiments on Aspen models of a stirred-tank reactor unit, an extractive distillation subsystem, and a chemical plant demonstrate that this model can further enhance the prediction accuracy.
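Hard linear equality constraints A·y = b can be enforced exactly (rather than penalised) by orthogonally projecting a network's raw output onto the feasible set, y* = y − Aᵀ(AAᵀ)⁻¹(Ay − b), which is the flavour of guarantee KKT-hPINN provides. A minimal sketch, with a made-up constraint (a mass balance y1 + y2 + y3 = 1) rather than anything from the paper's Aspen case studies:

```python
import numpy as np

# Constraint A @ y = b; here a single made-up mass balance: y1 + y2 + y3 = 1.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])

def project(y):
    # Orthogonal projection onto the affine set {y : A @ y = b}.
    correction = A.T @ np.linalg.solve(A @ A.T, A @ y - b)
    return y - correction

y_raw = np.array([0.2, 0.5, 0.6])  # raw model output; violates the constraint
y_fix = project(y_raw)
print(A @ y_fix)  # constraint now satisfied (up to floating point)
```

Because the projection is an affine map, it can sit as a final layer and be differentiated through during training, so the constraint holds at every training step and at inference.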
arXiv Detail & Related papers (2024-02-11T17:40:26Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning tasks into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning and then propose a simple-yet-effective numerical solver, Attr, which introduces an additive self-attention mechanism to the numerical solution of differential equations.
arXiv Detail & Related papers (2023-02-05T01:39:21Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - Neural Galerkin Schemes with Active Learning for High-Dimensional
Evolution Equations [44.89798007370551]
This work proposes Neural Galerkin schemes based on deep learning that generate training data with active learning for numerically solving high-dimensional partial differential equations.
Neural Galerkin schemes build on the Dirac-Frenkel variational principle to train networks by minimizing the residual sequentially over time.
Our finding is that the active form of gathering training data of the proposed Neural Galerkin schemes is key for numerically realizing the expressive power of networks in high dimensions.
arXiv Detail & Related papers (2022-03-02T19:09:52Z) - Physics-informed ConvNet: Learning Physical Field from a Shallow Neural
Network [0.180476943513092]
Modelling and forecasting multi-physical systems remain a challenge due to unavoidable data scarcity and noise.
A new framework, the physics-informed convolutional network (PICN), is proposed from a CNN perspective.
PICN may become an alternative neural network solver in physics-informed machine learning.
arXiv Detail & Related papers (2022-01-26T14:35:58Z) - Characterizing possible failure modes in physics-informed neural
networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z) - Training multi-objective/multi-task collocation physics-informed neural
network with student/teachers transfer learnings [0.0]
This paper presents a PINN training framework that employs pre-training steps and a net-to-net knowledge transfer algorithm.
A multi-objective optimization algorithm may improve the performance of a physics-informed neural network with competing constraints.
arXiv Detail & Related papers (2021-07-24T00:43:17Z) - Inverse-Dirichlet Weighting Enables Reliable Training of Physics
Informed Neural Networks [2.580765958706854]
We describe and remedy a failure mode that may arise from multi-scale dynamics with scale imbalances during training of deep neural networks.
PINNs are popular machine-learning templates that allow for seamless integration of physical equation models with data.
For inverse modeling using sequential training, we find that inverse-Dirichlet weighting protects a PINN against catastrophic forgetting.
arXiv Detail & Related papers (2021-07-02T10:01:37Z) - Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)