Surrogate-data-enriched Physics-Aware Neural Networks
- URL: http://arxiv.org/abs/2112.05489v1
- Date: Fri, 10 Dec 2021 12:39:07 GMT
- Title: Surrogate-data-enriched Physics-Aware Neural Networks
- Authors: Raphael Leiteritz, Patrick Buchfink, Bernard Haasdonk, Dirk Pflüger
- Abstract summary: We investigate how physics-aware models can be enriched with cheaper, but inexact, data from other surrogate models like Reduced-Order Models (ROMs).
As a proof of concept, we consider the one-dimensional wave equation and show that the training accuracy is increased by two orders of magnitude when inexact data from ROMs is incorporated.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks can be used as surrogates for PDE models. They can be made
physics-aware by penalizing underlying equations or the conservation of
physical properties in the loss function during training. Current approaches
additionally allow data from numerical simulations or experiments to be
incorporated in the training process. However, such data are frequently
expensive to obtain and thus only scarcely available for complex models. In
this work, we investigate
how physics-aware models can be enriched with computationally cheaper, but
inexact, data from other surrogate models like Reduced-Order Models (ROMs). To
avoid placing too much trust in low-fidelity surrogate solutions, we develop an
approach that is sensitive to the error in the inexact data. As a proof of concept,
we consider the one-dimensional wave equation and show that the training
accuracy is increased by two orders of magnitude when inexact data from ROMs is
incorporated.
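As a rough illustration of the idea in the abstract, the following is a minimal sketch (not the authors' code) of a physics-aware network for the 1D wave equation u_tt = c^2 u_xx whose loss combines a PDE residual term with a data term on inexact ROM snapshots. The error-based down-weighting of the ROM samples is an assumed scheme for illustration, not the paper's exact formulation; all names and hyperparameters are hypothetical.

```python
# Minimal sketch (not the authors' code): a physics-aware network for the
# 1D wave equation u_tt = c^2 * u_xx, enriched with inexact ROM data.
# The per-sample weights derived from a ROM error estimate are an assumption.
import torch
import torch.nn as nn

class PINN(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def wave_residual(model, x, t, c=1.0):
    """PDE residual u_tt - c^2 * u_xx at collocation points (x, t)."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = model(x, t)
    grad = lambda y, v: torch.autograd.grad(y, v, torch.ones_like(y), create_graph=True)[0]
    u_t, u_x = grad(u, t), grad(u, x)
    u_tt, u_xx = grad(u_t, t), grad(u_x, x)
    return u_tt - c**2 * u_xx

def loss_fn(model, colloc, rom_data, c=1.0, lam=1.0):
    x_c, t_c = colloc                    # collocation points for the physics term
    x_d, t_d, u_rom, err_est = rom_data  # ROM snapshots plus an error estimate per point
    loss_pde = (wave_residual(model, x_c, t_c, c) ** 2).mean()
    # Trust ROM samples less where their estimated error is large (assumed weighting).
    w = 1.0 / (1.0 + err_est ** 2)
    loss_data = (w * (model(x_d, t_d) - u_rom) ** 2).mean()
    return loss_pde + lam * loss_data
```

In the paper's proof of concept, incorporating such ROM data for the one-dimensional wave equation improves training accuracy by two orders of magnitude.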
Related papers
- Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This paper proposes an open-source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z) - Machine Learning Force Fields with Data Cost Aware Training [94.78998399180519]
Machine learning force fields (MLFF) have been proposed to accelerate molecular dynamics (MD) simulation.
Even for the most data-efficient MLFFs, reaching chemical accuracy can require hundreds of frames of force and energy labels.
We propose a multi-stage computational framework -- ASTEROID, which lowers the data cost of MLFFs by leveraging a combination of cheap inaccurate data and expensive accurate data.
arXiv Detail & Related papers (2023-06-05T04:34:54Z) - AI Model Disgorgement: Methods and Choices [127.54319351058167]
We introduce a taxonomy of possible disgorgement methods that are applicable to modern machine learning systems.
We investigate the meaning of "removing the effects" of data in the trained model in a way that does not require retraining from scratch.
arXiv Detail & Related papers (2023-04-07T08:50:18Z) - Deep Physics Corrector: A physics enhanced deep learning architecture
for solving stochastic differential equations [0.0]
We propose a novel gray-box modeling algorithm for physical systems governed by stochastic differential equations (SDEs).
The proposed approach, referred to as the Deep Physics Corrector (DPC), blends approximate physics, represented in terms of an SDE, with a deep neural network (DNN).
We illustrate the performance of the proposed DPC on four benchmark examples from the literature.
arXiv Detail & Related papers (2022-09-20T14:30:07Z) - Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse
Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamic and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z) - Physics-based Digital Twins for Autonomous Thermal Food Processing:
Efficient, Non-intrusive Reduced-order Modeling [0.0]
This paper proposes a physics-based, data-driven Digital Twin framework for autonomous food processing.
A correlation between a high standard deviation of the surface temperatures in the training data and a low root mean square error in ROM testing enables efficient selection of training data.
arXiv Detail & Related papers (2022-09-07T10:58:38Z) - Simultaneous boundary shape estimation and velocity field de-noising in
Magnetic Resonance Velocimetry using Physics-informed Neural Networks [70.7321040534471]
Magnetic resonance velocimetry (MRV) is a non-invasive technique widely used in medicine and engineering to measure the velocity field of a fluid.
Previous studies have required the shape of the boundary (for example, a blood vessel) to be known a priori.
We present a physics-informed neural network that instead uses the noisy MRV data alone to infer the most likely boundary shape and de-noised velocity field.
arXiv Detail & Related papers (2021-07-16T12:56:09Z) - Model-Constrained Deep Learning Approaches for Inverse Problems [0.0]
Deep Learning (DL) is purely data-driven and does not require physics.
DL methods in their original forms are not capable of respecting the underlying mathematical models.
We present and provide intuitions for our formulations for general nonlinear problems.
arXiv Detail & Related papers (2021-05-25T16:12:39Z) - Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion, where the data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation.
arXiv Detail & Related papers (2021-05-18T15:13:00Z) - Transfer learning based multi-fidelity physics informed deep neural
network [0.0]
The governing differential equation is either not known or known in an approximate sense.
This paper presents a novel multi-fidelity physics informed deep neural network (MF-PIDNN).
MF-PIDNN blends physics-informed and data-driven deep learning techniques by using the concept of transfer learning; a minimal sketch of this idea follows this entry.
arXiv Detail & Related papers (2020-05-19T13:57:48Z)
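The following is a hedged sketch of the transfer-learning idea summarized in the MF-PIDNN entry above: pre-train a network on the approximate (low-fidelity) physics loss, then freeze most layers and fine-tune the last layer on a few high-fidelity data points. It assumes a model exposing a `.net` stack of linear layers, such as the `PINN` sketch earlier; the freezing strategy, optimizers, and step counts are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative two-stage transfer learning in the spirit of MF-PIDNN.
# Assumes a model exposing a .net nn.Sequential of Linear/activation layers.
import torch
import torch.nn as nn

def pretrain_low_fidelity(model, lf_loss_fn, steps=5000, lr=1e-3):
    """Stage 1: train all parameters on the approximate (low-fidelity) physics loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        lf_loss_fn(model).backward()
        opt.step()
    return model

def finetune_high_fidelity(model, inputs_hf, y_hf, steps=2000, lr=1e-4):
    """Stage 2: freeze all but the last linear layer and fit sparse high-fidelity data."""
    linear_layers = [m for m in model.net if isinstance(m, nn.Linear)]
    for layer in linear_layers[:-1]:
        for p in layer.parameters():
            p.requires_grad = False
    opt = torch.optim.Adam(linear_layers[-1].parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        mse(model(*inputs_hf), y_hf).backward()
        opt.step()
    return model
```

Fine-tuning only the last layer keeps the physics learned in stage 1 largely intact while adapting the model to the sparse high-fidelity data.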
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.