Deep Physics Corrector: A physics enhanced deep learning architecture
for solving stochastic differential equations
- URL: http://arxiv.org/abs/2209.09750v1
- Date: Tue, 20 Sep 2022 14:30:07 GMT
- Title: Deep Physics Corrector: A physics enhanced deep learning architecture
for solving stochastic differential equations
- Authors: Tushar and Souvik Chakraborty
- Abstract summary: We propose a novel gray-box modeling algorithm for physical systems governed by stochastic differential equations (SDEs).
The proposed approach, referred to as the Deep Physics Corrector (DPC), blends approximate physics represented in terms of an SDE with a deep neural network (DNN).
We illustrate the performance of the proposed DPC on four benchmark examples from the literature.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a novel gray-box modeling algorithm for physical systems governed
by stochastic differential equations (SDE). The proposed approach, referred to
as the Deep Physics Corrector (DPC), blends approximate physics represented in
terms of an SDE with a deep neural network (DNN). The primary idea here is to
exploit DNN to model the missing physics. We hypothesize that combining
incomplete physics with data will make the model interpretable and allow better
generalization. The primary bottleneck in training surrogate models for
stochastic simulators is often the selection of a suitable loss function.
Among the different loss functions available in the
literature, we use the conditional maximum mean discrepancy (CMMD) loss
function in DPC because of its proven performance. Overall, physics-data fusion
and CMMD allow DPC to learn from sparse data. We illustrate the performance of
the proposed DPC on four benchmark examples from the literature. The results
obtained are highly accurate, indicating its possible application as a
surrogate model for stochastic simulators.
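The abstract names the ingredients but gives no implementation details, so the following is only a minimal sketch, under my own assumptions, of how the gray-box idea could look in code: an Euler-Maruyama rollout of the approximate SDE whose drift is corrected by a small DNN, trained by matching simulated samples to observed ones. The toy drift, the energy-distance loss used here as a simple stand-in for the CMMD loss, and all variable names are illustrative, not the authors' code.

```python
# Hypothetical sketch of the Deep Physics Corrector idea (not the authors' code):
# known-but-approximate drift f_approx(x) plus a DNN correction, Euler-Maruyama
# rollout, and a sample-based distribution-matching loss standing in for CMMD.
import torch
import torch.nn as nn

class DeepPhysicsCorrector(nn.Module):
    def __init__(self, dim: int, hidden: int = 64, dt: float = 1e-2, sigma: float = 0.1):
        super().__init__()
        self.dt, self.sigma = dt, sigma
        # DNN that models the missing physics (correction to the approximate drift)
        self.correction = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def f_approx(self, x: torch.Tensor) -> torch.Tensor:
        # Placeholder for the known, approximate physics drift (illustrative choice).
        return -x

    def step(self, x: torch.Tensor) -> torch.Tensor:
        # Euler-Maruyama step of dX = [f_approx(X) + NN(X)] dt + sigma dW
        drift = self.f_approx(x) + self.correction(x)
        noise = torch.randn_like(x) * (self.dt ** 0.5)
        return x + drift * self.dt + self.sigma * noise

    def rollout(self, x0: torch.Tensor, n_steps: int) -> torch.Tensor:
        x = x0
        for _ in range(n_steps):
            x = self.step(x)
        return x

def energy_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Simple kernel-free stand-in for the CMMD loss used in the paper:
    # encourages simulated and observed samples to share a distribution.
    d_ab = torch.cdist(a, b).mean()
    d_aa = torch.cdist(a, a).mean()
    d_bb = torch.cdist(b, b).mean()
    return 2.0 * d_ab - d_aa - d_bb

# Hypothetical training step on observed initial and end states (random toy data).
model = DeepPhysicsCorrector(dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x0_obs = torch.randn(128, 2)
xT_obs = 0.5 * x0_obs + 0.1 * torch.randn(128, 2)
loss = energy_distance(model.rollout(x0_obs, n_steps=50), xT_obs)
opt.zero_grad(); loss.backward(); opt.step()
```

The CMMD loss in the paper compares simulated and observed response distributions conditioned on the input; the energy distance above only mimics that distribution-matching role.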
Related papers
- Metamizer: a versatile neural optimizer for fast and accurate physics simulations [4.717325308876749]
We introduce Metamizer, a novel neural network that iteratively solves a wide range of physical systems with high accuracy.
We demonstrate that Metamizer achieves unprecedented accuracy among deep learning-based approaches.
Our results suggest that Metamizer could have a profound impact on future numerical solvers.
arXiv Detail & Related papers (2024-10-10T11:54:31Z) - Text2PDE: Latent Diffusion Models for Accessible Physics Simulation [7.16525545814044]
We introduce several methods to apply latent diffusion models to physics simulation.
We show that the proposed approach is competitive with current neural PDE solvers in both accuracy and efficiency.
By introducing a scalable, accurate, and usable physics simulator, we hope to bring neural PDE solvers closer to practical use.
arXiv Detail & Related papers (2024-10-02T01:09:47Z) - Physics-Informed Machine Learning of Argon Gas-Driven Melt Pool Dynamics [0.0]
Melt pool dynamics in metal additive manufacturing (AM) is critical to process stability, microstructure formation, and final properties of the printed materials.
This paper provides a physics-informed machine learning (PIML) method by integrating neural networks with the governing physical laws to predict the melt pool dynamics.
The data efficiency of the PINN model is attributed to the soft penalty that incorporates the governing partial differential equations (PDEs), initial conditions, and boundary conditions into the training loss.
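For readers unfamiliar with the soft-penalty construction mentioned above, the sketch below shows the generic PINN loss for a toy 1D diffusion equation u_t = alpha * u_xx; the melt-pool governing equations, weights, and data of the paper are not reproduced here, and all names are illustrative assumptions.

```python
# Generic PINN soft-penalty loss (illustrative; not the melt-pool model of the paper):
# total loss = PDE residual + initial-condition + boundary-condition + data penalties.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
alpha = 0.01  # assumed diffusivity for the toy PDE u_t = alpha * u_xx

def pde_residual(xt: torch.Tensor) -> torch.Tensor:
    # xt columns: [x, t]
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, :1]
    return u_t - alpha * u_xx

def pinn_loss(xt_interior, xt_ic, u_ic, xt_bc, u_bc, xt_data=None, u_data=None):
    loss = pde_residual(xt_interior).pow(2).mean()        # soft PDE penalty
    loss = loss + (net(xt_ic) - u_ic).pow(2).mean()       # initial condition
    loss = loss + (net(xt_bc) - u_bc).pow(2).mean()       # boundary condition
    if xt_data is not None:                               # optional sparse measurements
        loss = loss + (net(xt_data) - u_data).pow(2).mean()
    return loss
```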
arXiv Detail & Related papers (2023-07-23T12:12:44Z) - Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This paper proposes an open-source online training framework for deep surrogate models.
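As a rough illustration of the online-training idea (streaming solver outputs into the training loop instead of writing them to disk), here is a minimal sketch; `run_solver`, the network shape, and every other name are placeholders, not the framework released by the authors.

```python
# Minimal sketch of online training for a deep surrogate: batches are streamed
# from a numerical solver as they are produced, instead of being stored on disk.
import torch

def run_solver(params: torch.Tensor) -> torch.Tensor:
    # Toy stand-in for an expensive PDE solver: returns a 32x32 "field" per sample.
    x = torch.linspace(0, 1, 32)
    grid = x[None, :, None] * x[None, None, :]
    return params[:, :1, None] * grid + params[:, 1:2, None]

def solver_stream(batch_size: int):
    while True:
        params = torch.rand(batch_size, 4)   # sampled simulation parameters
        yield params, run_solver(params)     # produced on the fly, never written to disk

surrogate = torch.nn.Sequential(torch.nn.Linear(4, 128), torch.nn.ReLU(), torch.nn.Linear(128, 1024))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step, (params, fields) in zip(range(1000), solver_stream(batch_size=32)):
    loss = (surrogate(params) - fields.flatten(1)).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```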
arXiv Detail & Related papers (2023-06-28T12:02:27Z) - Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade-off computation to improve long-term prediction error.
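The following is only a schematic of the two components named above (a GNN that advances the state and a GNN-based actor-critic that scores refine/keep/coarsen actions per node); the message passing, feature sizes, and policy parameterisation are assumptions for illustration, not the LAMP architecture.

```python
# Schematic of the two LAMP components described above (illustrative, not the released model):
# (1) a GNN that predicts the forward evolution of node states on a mesh,
# (2) a GNN-based actor-critic whose actor scores refine/keep/coarsen per node.
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.upd = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        src, dst = edges                                   # edges: (2, E) index tensor
        m = self.msg(torch.cat([h[src], h[dst]], dim=-1))  # per-edge messages
        agg = torch.zeros_like(h).index_add_(0, dst, m)    # sum-aggregate at receivers
        return self.upd(torch.cat([h, agg], dim=-1))

class ForwardModel(nn.Module):
    # Advances node states by one time step with a residual update.
    def __init__(self, dim: int):
        super().__init__()
        self.gnn, self.out = SimpleGNNLayer(dim), nn.Linear(dim, dim)

    def forward(self, h, edges):
        return h + self.out(self.gnn(h, edges))

class RefinementActorCritic(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gnn = SimpleGNNLayer(dim)
        self.actor = nn.Linear(dim, 3)   # per-node logits: refine / keep / coarsen
        self.critic = nn.Linear(dim, 1)  # value estimate of the current mesh

    def forward(self, h, edges):
        z = self.gnn(h, edges)
        return self.actor(z), self.critic(z.mean(dim=0))

h = torch.randn(5, 16)                               # 5 mesh nodes with 16 features
edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])   # directed edges src -> dst
next_h = ForwardModel(16)(h, edges)
logits, value = RefinementActorCritic(16)(h, edges)
```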
arXiv Detail & Related papers (2023-05-01T23:20:27Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
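A minimal sketch of the spatial part of the decomposition described above: a fine 2D field is split into staggered coarse sub-fields, each of which could be handled by an independent coarse-resolution network and then reassembled. The stride of 2 and the interleaving scheme are assumptions for illustration, not the paper's exact construction.

```python
# Illustrative spatial decomposition in the spirit of NeuralStagger: a fine 2D grid
# is split into staggered coarse sub-grids (stride 2 in each direction), which can
# be learned by separate coarse-resolution networks and recombined afterwards.
import numpy as np

def stagger_split(field: np.ndarray, s: int = 2):
    # field: (H, W) fine-resolution array -> list of s*s coarse arrays of shape (H//s, W//s)
    return [field[i::s, j::s] for i in range(s) for j in range(s)]

def stagger_merge(parts, s: int = 2):
    h, w = parts[0].shape
    out = np.empty((h * s, w * s), dtype=parts[0].dtype)
    for k, (i, j) in enumerate((i, j) for i in range(s) for j in range(s)):
        out[i::s, j::s] = parts[k]
    return out

fine = np.random.rand(64, 64)
assert np.allclose(stagger_merge(stagger_split(fine)), fine)  # lossless round trip
```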
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - Human Trajectory Prediction via Neural Social Physics [63.62824628085961]
Trajectory prediction has been widely pursued in many fields, and many model-based and model-free methods have been explored.
We propose a new method combining both methodologies based on a new Neural Differential Equation model.
Our new model (Neural Social Physics or NSP) is a deep neural network within which we use an explicit physics model with learnable parameters.
arXiv Detail & Related papers (2022-07-21T12:11:18Z) - Learning to Solve PDE-constrained Inverse Problems with Graph Networks [51.89325993156204]
In many application domains across science and engineering, we are interested in solving inverse problems with constraints defined by a partial differential equation (PDE).
Here we explore GNNs to solve such PDE-constrained inverse problems.
We demonstrate computational speedups of up to 90x using GNNs compared to principled solvers.
arXiv Detail & Related papers (2022-06-01T18:48:01Z) - Physics Informed RNN-DCT Networks for Time-Dependent Partial
Differential Equations [62.81701992551728]
We present a physics-informed framework for solving time-dependent partial differential equations.
Our model utilizes discrete cosine transforms to encode spatial frequencies and recurrent neural networks to process the time evolution.
We show experimental results on the Taylor-Green vortex solution to the Navier-Stokes equations.
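One plausible reading of the DCT-plus-RNN combination described above is sketched below: each spatial snapshot is encoded by a truncated 2D DCT, the retained coefficients are evolved by a GRU, and predictions are decoded with the inverse DCT. The truncation size, the GRU, and all names are assumptions, not the paper's architecture.

```python
# Rough sketch of the DCT + RNN idea (illustrative, not the paper's architecture).
import numpy as np
import torch
from scipy.fft import dctn, idctn

K = 16  # number of retained low-frequency modes per axis (an assumption)

def encode(field: np.ndarray) -> np.ndarray:
    coeffs = dctn(field, norm="ortho")
    return coeffs[:K, :K].reshape(-1)            # keep the K x K low-frequency block

def decode(vec: np.ndarray, shape) -> np.ndarray:
    coeffs = np.zeros(shape)
    coeffs[:K, :K] = vec.reshape(K, K)
    return idctn(coeffs, norm="ortho")

rnn = torch.nn.GRU(input_size=K * K, hidden_size=256, batch_first=True)
head = torch.nn.Linear(256, K * K)

# fields: (T, H, W) trajectory of spatial snapshots (illustrative random data)
fields = np.random.rand(20, 64, 64)
codes = torch.tensor(np.stack([encode(f) for f in fields]), dtype=torch.float32)
hidden_seq, _ = rnn(codes[None, :-1])            # process the time evolution
pred_next = head(hidden_seq)[0, -1].detach().numpy()
reconstruction = decode(pred_next, fields.shape[1:])
```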
arXiv Detail & Related papers (2022-02-24T20:46:52Z) - Surrogate-data-enriched Physics-Aware Neural Networks [0.0]
We investigate how physics-aware models can be enriched with cheaper, but inexact, data from other surrogate models like Reduced-Order Models (ROMs).
As a proof of concept, we consider the one-dimensional wave equation and show that the training accuracy is increased by two orders of magnitude when inexact data from ROMs is incorporated.
arXiv Detail & Related papers (2021-12-10T12:39:07Z) - AdjointNet: Constraining machine learning models with physics-based
codes [0.17205106391379021]
This paper proposes a physics-constrained machine learning framework, AdjointNet, allowing domain scientists to embed their physics code in neural network training.
We show that the proposed AdjointNet framework can be used for parameter estimation (and uncertainty quantification by extension) and experimental design using active learning.
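The blurb above describes coupling an external physics code into neural-network training; one common way to express that coupling in PyTorch is a custom autograd Function whose backward pass returns adjoint sensitivities supplied by the physics code. The sketch below illustrates that pattern only; it is not the AdjointNet implementation, and `solve_forward` / `solve_adjoint` are hypothetical stand-ins for the external simulator and its adjoint solve.

```python
# Illustrative pattern for embedding an external physics code in NN training
# (not the AdjointNet implementation): a custom autograd Function whose backward
# pass returns adjoint sensitivities computed by the physics code.
import torch

def solve_forward(theta: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for an external simulator: observables = A(theta).
    return torch.stack([theta.sum(), (theta ** 2).sum()])

def solve_adjoint(theta: torch.Tensor, grad_obs: torch.Tensor) -> torch.Tensor:
    # Hypothetical adjoint solve returning d(observables)/d(theta)^T @ grad_obs.
    jac = torch.stack([torch.ones_like(theta), 2.0 * theta])  # (n_obs, n_theta)
    return jac.t() @ grad_obs

class PhysicsSolve(torch.autograd.Function):
    @staticmethod
    def forward(ctx, theta):
        ctx.save_for_backward(theta.detach())
        return solve_forward(theta.detach())

    @staticmethod
    def backward(ctx, grad_obs):
        (theta,) = ctx.saved_tensors
        return solve_adjoint(theta, grad_obs)

# Parameter estimation: a small NN maps measurements to physical parameters, and
# the physics code closes the loop by mapping parameters back to predicted observables.
net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
obs = torch.tensor([1.0, 2.0])
theta = net(obs)
loss = (PhysicsSolve.apply(theta) - obs).pow(2).mean()
opt.zero_grad(); loss.backward(); opt.step()
```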
arXiv Detail & Related papers (2021-09-08T22:43:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.