Model-Constrained Deep Learning Approaches for Inverse Problems
- URL: http://arxiv.org/abs/2105.12033v1
- Date: Tue, 25 May 2021 16:12:39 GMT
- Title: Model-Constrained Deep Learning Approaches for Inverse Problems
- Authors: Hai V. Nguyen, Tan Bui-Thanh
- Abstract summary: Deep Learning (DL) is purely data-driven and does not require physics.
DL methods in their original forms are not capable of respecting the underlying mathematical models.
We present and provide intuitions for our formulations for general nonlinear problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning (DL), in particular deep neural networks (DNN), by design is
purely data-driven and in general does not require physics. This is the
strength of DL but also one of its key limitations when applied to science and
engineering problems in which underlying physical properties (such as
stability, conservation, and positivity) and desired accuracy need to be
achieved. DL methods in their original forms are not capable of respecting the
underlying mathematical models or achieving desired accuracy even in big-data
regimes. On the other hand, many data-driven science and engineering problems,
such as inverse problems, typically have limited experimental or observational
data, and DL would overfit the data in this case. Leveraging information
encoded in the underlying mathematical models, we argue, not only compensates
for missing information in low-data regimes but also provides opportunities to
equip DL methods with the underlying physics and hence to obtain higher
accuracy. This short communication introduces several model-constrained DL
approaches (including both feed-forward DNNs and autoencoders) that are capable
of learning not only the information hidden in the training data but also that encoded in the
underlying mathematical models to solve inverse problems. We present and
provide intuitions for our formulations for general nonlinear problems. For
linear inverse problems and linear networks, the first-order optimality
conditions show that our model-constrained DL approaches can learn information
encoded in the underlying mathematical models, and thus can produce consistent
or equivalent inverse solutions, while naive, purely data-based counterparts
cannot.
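To make the idea concrete, below is a minimal sketch of a model-constrained training loss for an inverse problem: a network maps observations back to parameters, and the known forward model penalizes reconstructions that violate the physics. The toy linear operator A, the function forward_op, the weight lam, and the network architecture are all illustrative assumptions, not the paper's actual formulation or code.

```python
# Hedged sketch of model-constrained training for an inverse problem.
# All names and the toy linear forward model are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

n, m, n_train = 20, 15, 50   # parameter dim, observation dim, training samples
A = torch.randn(m, n)        # toy linear forward model F(x) = A x (assumption)

def forward_op(x):
    """Known forward (physics) model mapping parameters to observations."""
    return x @ A.T

# Synthetic training pairs: parameters x and noisy observations y = F(x) + noise.
x_true = torch.randn(n_train, n)
y_obs = forward_op(x_true) + 0.01 * torch.randn(n_train, m)

# Inverse map: a small network from observations y back to parameters x.
net = nn.Sequential(nn.Linear(m, 64), nn.Tanh(), nn.Linear(64, n))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam = 1.0                    # weight of the model-constraint term (assumption)

for epoch in range(2000):
    opt.zero_grad()
    x_pred = net(y_obs)
    data_misfit = (x_pred - x_true).pow(2).mean()              # data-driven term
    model_misfit = (forward_op(x_pred) - y_obs).pow(2).mean()  # physics term
    loss = data_misfit + lam * model_misfit
    loss.backward()
    opt.step()
```

Setting lam = 0 recovers the naive, purely data-driven regression the abstract warns about; with lam > 0 the network also sees the forward model during training, which is what compensates for scarce data.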
Related papers
- chemtrain: Learning Deep Potential Models via Automatic Differentiation and Statistical Physics [0.0]
Neural networks (NNs) are promising models for improving the accuracy of molecular dynamics simulations.
Chemtrain is a framework to learn sophisticated NN potential models through customizable training routines and advanced training algorithms.
arXiv Detail & Related papers (2024-08-28T15:14:58Z)
- Characteristic Performance Study on Solving Oscillator ODEs via Soft-constrained Physics-informed Neural Network with Small Data [6.3295494018089435]
This paper compares physics-informed neural networks (PINNs), conventional neural networks (NNs), and traditional numerical discretization methods for solving differential equations (DEs).
We focus on the soft-constrained PINN approach and formalize its mathematical framework and computational flow for solving ordinary and partial DEs.
We demonstrate that the DeepXDE-based implementation of PINN is not only lightweight in code and efficient in training, but also flexible across CPU/GPU platforms (a minimal sketch of such a soft-constrained residual loss appears after this list).
arXiv Detail & Related papers (2024-08-19T13:02:06Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Discovering Interpretable Physical Models using Symbolic Regression and Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We prove the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z)
- Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions to PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk, and read back for training.
This paper instead proposes an open-source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z)
- Physics-guided Data Augmentation for Learning the Solution Operator of Linear Differential Equations [2.1850269949775663]
We propose a physics-guided data augmentation (PGDA) method to improve the accuracy and generalization of neural operator models.
We demonstrate the advantage of PGDA on a variety of linear differential equations, showing that PGDA can improve the sample complexity and is robust to distributional shift.
arXiv Detail & Related papers (2022-12-08T06:29:15Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Surrogate-data-enriched Physics-Aware Neural Networks [0.0]
We investigate how physics-aware models can be enriched with cheaper, but inexact, data from other surrogate models like Reduced-Order Models (ROMs).
As a proof of concept, we consider the one-dimensional wave equation and show that the training accuracy is increased by two orders of magnitude when inexact data from ROMs is incorporated.
arXiv Detail & Related papers (2021-12-10T12:39:07Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- A probabilistic generative model for semi-supervised training of coarse-grained surrogates and enforcing physical constraints through virtual observables [3.8073142980733]
This paper provides a flexible, probabilistic framework that accounts for physical structure and information both in the training objectives and in the surrogate model itself.
We advocate a probabilistic model in which equalities that are available from the physics can be introduced as virtual observables and can provide additional information through the likelihood.
arXiv Detail & Related papers (2020-06-02T17:14:36Z)
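As referenced in the soft-constrained PINN entry above, the following is a minimal sketch of a soft-constrained PINN loss for the harmonic oscillator ODE u'' + w^2 u = 0 with u(0) = 1, u'(0) = 0. It is written directly in PyTorch rather than DeepXDE; the frequency w, the network size, and the collocation grid are illustrative assumptions, not the cited paper's setup.

```python
# Hedged sketch of a soft-constrained PINN for u'' + w^2 u = 0, u(0)=1, u'(0)=0.
import torch
import torch.nn as nn

torch.manual_seed(0)
w = 2.0  # oscillator frequency (illustrative assumption)

# Small fully connected network u_theta(t) approximating the ODE solution.
net = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Collocation points where the ODE residual is penalized.
t = torch.linspace(0.0, 2.0, 100).reshape(-1, 1).requires_grad_(True)
t0 = torch.zeros(1, 1, requires_grad=True)  # initial time for the ICs

for step in range(5000):
    opt.zero_grad()
    u = net(t)
    # First and second derivatives of u w.r.t. t via automatic differentiation.
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, t, torch.ones_like(du), create_graph=True)[0]
    residual = (d2u + w ** 2 * u).pow(2).mean()  # soft ODE constraint
    u0 = net(t0)
    du0 = torch.autograd.grad(u0, t0, torch.ones_like(u0), create_graph=True)[0]
    ic = (u0 - 1.0).pow(2).mean() + du0.pow(2).mean()  # soft initial conditions
    loss = residual + ic
    loss.backward()
    opt.step()
```

Both the ODE residual and the initial conditions enter the loss as penalties, which is what "soft-constrained" means: the network is nudged toward, but not forced to exactly satisfy, the physics.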