Transfer learning based multi-fidelity physics informed deep neural
network
- URL: http://arxiv.org/abs/2005.10614v2
- Date: Sun, 14 Jun 2020 05:17:32 GMT
- Title: Transfer learning based multi-fidelity physics informed deep neural
network
- Authors: Souvik Chakraborty
- Abstract summary: The governing differential equation is either not known or known in an approximate sense.
This paper presents a novel multi-fidelity physics informed deep neural network (MF-PIDNN)
MF-PIDNN blends physics informed and data-driven deep learning techniques by using the concept of transfer learning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For many systems in science and engineering, the governing differential
equation is either not known or known in an approximate sense. Analyses and
design of such systems are governed by data collected from the field and/or
laboratory experiments. This challenging scenario is further exacerbated when
data collection is expensive and time-consuming. To address this issue, this
paper presents a novel multi-fidelity physics informed deep neural network
(MF-PIDNN). The framework proposed is particularly suitable when the physics of
the problem is known in an approximate sense (low-fidelity physics) and only a
few high-fidelity data are available. MF-PIDNN blends physics informed and
data-driven deep learning techniques by using the concept of transfer learning.
The approximate governing equation is first used to train a low-fidelity
physics informed deep neural network. This is followed by transfer learning
where the low-fidelity model is updated by using the available high-fidelity
data. MF-PIDNN is able to encode useful information on the physics of the
problem from the approximate governing differential equation and hence,
provides accurate prediction even in zones with no data. Additionally, no
low-fidelity data is required for training this model. Applicability and
utility of MF-PIDNN are illustrated in solving four benchmark reliability
analysis problems. Case studies to illustrate interesting features of the
proposed approach are also presented.
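The two-stage procedure described in the abstract (train on the approximate physics first, then fine-tune on a few high-fidelity samples) can be sketched on a toy problem. In the sketch below a small sine-basis model stands in for the deep network so that it runs with numpy alone, and the ODEs, sample locations, and ridge-style fine-tuning are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

# Basis "network": u(x) = sum_k c_k sin(k*pi*x); satisfies u(0) = u(1) = 0.
K = 5

def features(x):           # shape (len(x), K)
    k = np.arange(1, K + 1)
    return np.sin(np.outer(x, k * np.pi))

def features_dd(x):        # second derivative of each basis function
    k = np.arange(1, K + 1)
    return -(k * np.pi) ** 2 * np.sin(np.outer(x, k * np.pi))

# Stage 1: "low-fidelity physics" training. Approximate ODE: u''(x) = -1.
# No data is used here, only the (approximate) governing equation.
xc = np.linspace(0.01, 0.99, 50)             # collocation points
A = features_dd(xc)
b = -np.ones_like(xc)
c_lf = np.linalg.lstsq(A, b, rcond=None)[0]  # minimise the physics residual

# Stage 2: transfer learning. True (high-fidelity) system: u'' = -1.5,
# observed only through 4 expensive samples. Fine-tune from c_lf with a
# ridge penalty that keeps the model close to the physics solution.
u_true = lambda x: 0.75 * x * (1 - x)        # exact solution of u'' = -1.5
xd = np.array([0.2, 0.4, 0.6, 0.8])
Phi = features(xd)
lam = 1e-3
c_hf = np.linalg.solve(Phi.T @ Phi + lam * np.eye(K),
                       Phi.T @ u_true(xd) + lam * c_lf)

# Accurate prediction even at points with no high-fidelity data:
xq = np.array([0.1, 0.5, 0.9])
err = np.max(np.abs(features(xq) @ c_hf - u_true(xq)))
print(f"max error at unseen points: {err:.2e}")
```

Stage 1 never sees data, only the approximate equation; stage 2 reuses the stage-1 coefficients as its starting point, which is what lets the fine-tuned model extrapolate sensibly into zones with no high-fidelity samples.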
Related papers
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
The paper proposes an open-source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z)
- Differentiable Multi-Fidelity Fusion: Efficient Learning of Physics Simulations with Neural Architecture Search and Transfer Learning [1.0024450637989093]
We propose the differentiable multi-fidelity (DMF) model, which leverages neural architecture search (NAS) to automatically find a suitable model architecture for each problem.
DMF can efficiently learn physics simulations from only a few high-fidelity training samples, and outperforms state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2023-06-12T07:18:13Z)
- Physics-aware deep learning framework for linear elasticity [0.0]
The paper presents an efficient and robust data-driven deep learning (DL) computational framework for linear continuum elasticity problems.
For an accurate representation of the field variables, a multi-objective loss function is proposed.
Several benchmark problems, including the Airy solution to elasticity and the Kirchhoff-Love plate problem, are solved.
arXiv Detail & Related papers (2023-02-19T20:33:32Z)
- Human Trajectory Prediction via Neural Social Physics [63.62824628085961]
Trajectory prediction has been widely pursued in many fields, and many model-based and model-free methods have been explored.
We propose a new method combining both methodologies based on a new Neural Differential Equation model.
Our new model (Neural Social Physics or NSP) is a deep neural network within which we use an explicit physics model with learnable parameters.
arXiv Detail & Related papers (2022-07-21T12:11:18Z)
- Scalable algorithms for physics-informed neural and graph networks [0.6882042556551611]
Physics-informed machine learning (PIML) has emerged as a promising new approach for simulating complex physical and biological systems.
In PIML, we can train such networks from additional information obtained by employing the physical laws and evaluating them at random points in the space-time domain.
We review some of the prevailing trends in embedding physics into machine learning, using physics-informed neural networks (PINNs) based primarily on feed-forward neural networks and automatic differentiation.
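The mechanism mentioned above, evaluating the physical law at random points in the space-time domain and penalizing its residual, can be illustrated with a minimal numpy sketch. Here a one-parameter trial function stands in for the network, and the residual is formed by finite differences rather than the automatic differentiation a real PINN would use; the ODE and trial function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Trial model: u_theta(x) = theta * x * (1 - x), a one-parameter stand-in
# for a neural network; it satisfies u(0) = u(1) = 0 by construction.
def u(theta, x):
    return theta * x * (1 - x)

# ODE: u''(x) = -2 on (0, 1). Residual via central finite differences
# (real PINNs differentiate the network with automatic differentiation).
def physics_loss(theta, x, h=1e-4):
    u_dd = (u(theta, x + h) - 2 * u(theta, x) + u(theta, x - h)) / h ** 2
    return np.mean((u_dd - (-2.0)) ** 2)

xc = rng.uniform(0.05, 0.95, 100)   # random collocation points

# theta = 1 solves the ODE exactly: u = x(1 - x), u'' = -2.
print(physics_loss(1.0, xc))   # ~0
print(physics_loss(0.5, xc))   # ~1.0 (up to finite-difference error)
```

Training a PINN amounts to minimizing this collocation loss (plus boundary and data terms) over the model parameters.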
arXiv Detail & Related papers (2022-05-16T15:46:11Z)
- Surrogate-data-enriched Physics-Aware Neural Networks [0.0]
We investigate how physics-aware models can be enriched with cheaper, but inexact, data from other surrogate models such as Reduced-Order Models (ROMs).
As a proof of concept, we consider the one-dimensional wave equation and show that the training accuracy is increased by two orders of magnitude when inexact data from ROMs is incorporated.
arXiv Detail & Related papers (2021-12-10T12:39:07Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Physics-informed deep learning for incompressible laminar flows [13.084113582897965]
We propose a mixed-variable scheme of physics-informed neural network (PINN) for fluid dynamics.
A parametric study indicates that the mixed-variable scheme can improve the PINN trainability and the solution accuracy.
arXiv Detail & Related papers (2020-02-24T21:51:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.