Self-Validated Physics-Embedding Network: A General Framework for
Inverse Modelling
- URL: http://arxiv.org/abs/2210.06071v1
- Date: Wed, 12 Oct 2022 10:31:36 GMT
- Title: Self-Validated Physics-Embedding Network: A General Framework for
Inverse Modelling
- Authors: Ruiyuan Kang, Dimitrios C. Kyritsis, Panos Liatsis
- Abstract summary: Self-Validated Physics-Embedding Network (SVPEN) is a general neural network framework for inverse modeling.
The embedded physical forward model ensures that any solution that successfully passes its validation is physically reasonable.
More than ten case studies in two highly nonlinear and entirely distinct applications are presented.
- Score: 2.449329947677678
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physics-based inverse modeling techniques are typically restricted to
particular research fields, whereas popular machine-learning-based ones are too
data-dependent to guarantee the physical compatibility of the solution. In this
paper, Self-Validated Physics-Embedding Network (SVPEN), a general neural
network framework for inverse modeling, is proposed. As its name suggests, the
embedded physical forward model ensures that any solution that successfully
passes its validation is physically reasonable. SVPEN operates in two modes:
(a) the inverse function mode offers rapid state estimation as conventional
supervised learning, and (b) the optimization mode offers a way to iteratively
correct estimations that fail the validation process. Furthermore, the
optimization mode provides SVPEN with reconfigurability, i.e., replacing
components like neural networks, physical models, and error calculations at
will to solve a series of distinct inverse problems without pretraining. More
than ten case studies in two highly nonlinear and entirely distinct
applications: molecular absorption spectroscopy and Turbofan cycle analysis,
demonstrate the generality, physical reliability, and reconfigurability of
SVPEN. More importantly, SVPEN offers a solid foundation to use existing
physical models within the context of AI, so as to strike a balance between
data-driven and physics-driven models.
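The two-mode workflow described in the abstract can be summarized in a short sketch. This is a minimal illustration of the idea, not the authors' implementation: all names (inverse_net, forward_model, error_fn, tolerance) are hypothetical, and the embedded forward model is assumed to be differentiable so the optimization mode can use gradients.

```python
import torch

def svpen_estimate(measurement, inverse_net, forward_model, error_fn,
                   tolerance=1e-3, max_iters=200, lr=1e-2):
    """Hypothetical sketch of SVPEN's two modes; not the authors' code."""
    # Mode (a): fast inverse-function estimate, as in supervised learning.
    state = inverse_net(measurement)

    # Self-validation: re-simulate the measurement with the embedded
    # physical forward model and check the reconstruction error.
    if error_fn(forward_model(state), measurement) < tolerance:
        return state  # passes validation: physically reasonable

    # Mode (b): optimization mode iteratively corrects estimates that
    # fail validation, keeping the forward model in the loop.
    state = state.detach().requires_grad_(True)
    optimizer = torch.optim.Adam([state], lr=lr)
    for _ in range(max_iters):
        optimizer.zero_grad()
        loss = error_fn(forward_model(state), measurement)
        if loss.item() < tolerance:
            break
        loss.backward()
        optimizer.step()
    return state.detach()
```

Swapping inverse_net, forward_model, or error_fn reconfigures the same loop for a different inverse problem, which mirrors the reconfigurability claim in the abstract.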
Related papers
- Adapting Physics-Informed Neural Networks To Optimize ODEs in Mosquito Population Dynamics [0.019972837513980313]
We propose a PINN framework with several improvements for forward and inverse problems in ODE systems.
The framework tackles the gradient imbalance and stiff problems posed by mosquito ordinary differential equations.
Preliminary results indicate that physics-informed machine learning holds significant potential for advancing the study of ecological systems.
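As a rough illustration of the kind of physics loss such a framework minimizes, the sketch below writes a PINN residual loss for a generic ODE system du/dt = f(u, t), with static loss weights standing in for the paper's treatment of gradient imbalance; the specific ODE, weights, and names are assumptions, not the paper's.

```python
import torch

def pinn_ode_loss(net, t_colloc, t0, u0, f, w_data=1.0, w_phys=1.0):
    """Illustrative PINN loss for du/dt = f(u, t); not the paper's code."""
    t = t_colloc.clone().requires_grad_(True)
    u = net(t)
    # Physics residual at collocation points: du/dt - f(u, t).
    du_dt = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    residual = du_dt - f(u, t)
    # Static weights stand in for the paper's gradient-balancing strategy.
    data_loss = (net(t0) - u0).pow(2).mean()
    return w_data * data_loss + w_phys * residual.pow(2).mean()
```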
arXiv Detail & Related papers (2024-06-07T17:40:38Z)
- Solving Inverse Problems with Model Mismatch using Untrained Neural Networks within Model-based Architectures [14.551812310439004]
We introduce an untrained forward model residual block within the model-based architecture to match the data consistency in the measurement domain for each instance.
Our approach offers a unified solution that is less parameter-sensitive, requires no additional data, and enables simultaneous fitting of the forward model and reconstruction in a single pass.
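A conceptual sketch of this per-instance correction might look as follows: an untrained residual network is attached to an approximate forward operator A and fitted jointly with the reconstruction x on a single measurement y. The joint objective and all names here are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

def fit_single_instance(y, A, x_init, iters=500, lr=1e-3):
    """Illustrative joint fit of reconstruction and model-mismatch residual."""
    # Untrained residual block, initialized fresh for each instance.
    residual = nn.Sequential(nn.Linear(y.numel(), 64), nn.ReLU(),
                             nn.Linear(64, y.numel()))
    x = x_init.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([x, *residual.parameters()], lr=lr)
    for _ in range(iters):
        optimizer.zero_grad()
        y_model = A(x)                       # approximate (mismatched) physics
        y_hat = y_model + residual(y_model)  # corrected forward prediction
        loss = (y_hat - y).pow(2).mean()     # data consistency in measurement domain
        loss.backward()
        optimizer.step()
    return x.detach()
```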
arXiv Detail & Related papers (2024-03-07T19:02:13Z)
- Hybrid data-driven and physics-informed regularized learning of cyclic plasticity with Neural Networks [0.0]
The proposed model architecture is simpler and more efficient compared to existing solutions from the literature.
The validation of the approach is carried out by means of surrogate data obtained with the Armstrong-Frederick kinematic hardening model.
arXiv Detail & Related papers (2024-03-04T07:09:54Z)
- Discovering Interpretable Physical Models using Symbolic Regression and Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We prove the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly captures both the governing PDE and the material model.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw), which utilizes a network architecture that strictly guarantees standard constitutive priors.
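The split between known governing equations and a learned material model can be illustrated with a toy 1-D elastodynamics step, where time integration and spatial derivatives are hard-coded and only the stress-strain response is a network; the setup and names are illustrative assumptions, not NCLaw's architecture.

```python
import torch
import torch.nn as nn

# Learned constitutive law: strain -> stress (illustrative stand-in).
stress_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def step(u, v, dt, dx, rho=1.0):
    """One explicit step of known 1-D elastodynamics with a learned material law."""
    # Known physics, enforced exactly: rho * dv/dt = d(sigma)/dx, du/dt = v.
    strain = (u[1:] - u[:-1]) / dx
    sigma = stress_net(strain.unsqueeze(-1)).squeeze(-1)  # learned part
    dsigma_dx = torch.zeros_like(u)
    dsigma_dx[1:-1] = (sigma[1:] - sigma[:-1]) / dx
    v = v + dt * dsigma_dx / rho
    u = u + dt * v
    return u, v
```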
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- MINN: Learning the dynamics of differential-algebraic equations and application to battery modeling [3.900623554490941]
We propose a novel architecture for generating model-integrated neural networks (MINN).
MINN allows integration at the level of learning the physics-based dynamics of the system.
We apply the proposed neural network architecture to model the electrochemical dynamics of lithium-ion batteries.
arXiv Detail & Related papers (2023-04-27T09:11:40Z)
- Generalized Neural Closure Models with Interpretability [28.269731698116257]
We develop a novel and versatile methodology of unified neural partial delay differential equations.
We augment existing/low-fidelity dynamical models directly in their partial differential equation (PDE) forms with both Markovian and non-Markovian neural network (NN) closure parameterizations.
We demonstrate the new generalized neural closure models (gnCMs) framework using four sets of experiments based on advecting nonlinear waves, shocks, and ocean acidification models.
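In spirit, a Markovian closure of this kind augments the right-hand side of a known low-fidelity model with an NN correction term trained against higher-fidelity data; the sketch below is a conceptual toy with assumed names, not the gnCM formulation.

```python
import torch
import torch.nn as nn

# Learned closure term (Markovian: depends only on the current state).
closure_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def augmented_rhs(u, low_fidelity_rhs):
    """du/dt = f_low(u) + NN(u): coarse physics plus a learned closure."""
    correction = closure_net(u.unsqueeze(-1)).squeeze(-1)
    return low_fidelity_rhs(u) + correction
```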
arXiv Detail & Related papers (2023-01-15T21:57:43Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
The derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes factorizing the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
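A minimal learned stencil on a periodic 1-D grid conveys the flavor of such a solver: hand-designed finite-difference weights are replaced by MLP-computed messages. Sizes and features below are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

message_mlp = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 32))
update_mlp = nn.Sequential(nn.Linear(33, 64), nn.ReLU(), nn.Linear(64, 1))

def message_passing_step(u):
    """One learned update on a periodic 1-D grid; an illustrative toy."""
    left, right = torch.roll(u, 1), torch.roll(u, -1)
    # Edge features (node value, neighbour difference) act as a learned stencil.
    messages = (message_mlp(torch.stack([u, left - u], dim=-1)) +
                message_mlp(torch.stack([u, right - u], dim=-1)))
    # Node update from aggregated messages and the current state.
    return update_mlp(torch.cat([messages, u.unsqueeze(-1)], dim=-1)).squeeze(-1)
```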
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z)
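The grounding idea can be caricatured as a decoder that routes one block of latents through a physics model and lets the remaining latents drive a learned correction; class and variable names below are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PhysicsGroundedDecoder(nn.Module):
    """Illustrative decoder: incomplete physics on z_phys, NN residual on z_aux."""
    def __init__(self, physics_model, z_aux_dim, out_dim):
        super().__init__()
        self.physics = physics_model  # interpretable, physics-grounded part
        self.corrector = nn.Sequential(
            nn.Linear(z_aux_dim + out_dim, 64), nn.ReLU(),
            nn.Linear(64, out_dim))

    def forward(self, z_phys, z_aux):
        x_phys = self.physics(z_phys)  # prediction from incomplete physics
        return x_phys + self.corrector(torch.cat([z_aux, x_phys], dim=-1))
```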
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.