Physics Encoded Blocks in Residual Neural Network Architectures for Digital Twin Models
- URL: http://arxiv.org/abs/2411.11497v1
- Date: Mon, 18 Nov 2024 11:58:20 GMT
- Title: Physics Encoded Blocks in Residual Neural Network Architectures for Digital Twin Models
- Authors: Muhammad Saad Zia, Ashiq Anjum, Lu Liu, Anthony Conway, Anasol Pena Rios
- Abstract summary: This paper presents a generic approach based on a novel physics-encoded residual neural network architecture.
Our method combines physics blocks as mathematical operators from physics-based models with learning blocks comprising feed-forward layers.
Compared to conventional neural network-based methods, our method improves generalizability with substantially lower data requirements.
- Score: 2.8720819157502344
- Abstract: Physics Informed Machine Learning has emerged as a popular approach in modelling and simulation for digital twins to generate accurate models of processes and behaviours of real-world systems. However, despite their success in generating accurate and reliable models, existing methods either use simple regularizations in loss functions, offering only limited physics integration, or are too specific in their architectural definitions to generalize to a wide variety of physical systems. This paper presents a generic approach based on a novel physics-encoded residual neural network architecture that combines data-driven and physics-based analytical models to address these limitations. Our method combines physics blocks, implemented as mathematical operators from physics-based models, with learning blocks comprising feed-forward layers. Intermediate residual blocks are incorporated for stable gradient flow as the model trains on physical system observation data. In this way, the model learns to comply with the geometric and kinematic aspects of the physical system. Compared to conventional neural network-based methods, our method improves generalizability with substantially lower data requirements and model complexity in terms of parameters, especially in scenarios where prior physics knowledge is either elementary or incomplete. We investigate our approach in two application domains. The first is a basic robotic motion model using the Euler-Lagrange equations of motion as the physics prior. The second is a more complex scenario: a steering model for a self-driving vehicle in a simulation. In both applications, our method outperforms both conventional neural network-based approaches and state-of-the-art Physics Informed Machine Learning methods.
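To make the architecture concrete, below is a minimal PyTorch sketch of the pattern the abstract describes: fixed physics blocks (mathematical operators taken from a physics-based model) interleaved with feed-forward learning blocks that carry residual connections. All class names, layer sizes, and the toy physics operator are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the idea in the abstract: interleave a fixed "physics block"
# (a mathematical operator from a physics-based model) with learnable feed-forward
# "learning blocks", joined by residual (skip) connections for stable gradients.
import torch
import torch.nn as nn


class LearningBlock(nn.Module):
    """Feed-forward block with a residual connection for stable gradient flow."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, x):
        return x + self.net(x)  # residual: the block learns a correction to its input


class PhysicsEncodedResidualNet(nn.Module):
    """Alternates a non-trainable physics operator with residual learning blocks."""
    def __init__(self, dim: int, physics_op):
        super().__init__()
        self.physics_op = physics_op          # e.g. an Euler-Lagrange update step
        self.pre = LearningBlock(dim)
        self.post = LearningBlock(dim)

    def forward(self, state):
        h = self.pre(state)                   # learn what the prior does not capture
        h = h + self.physics_op(h)            # physics block applied as an operator
        return self.post(h)                   # data-driven refinement of the output


# Toy physics prior: harmonic-oscillator dynamics on a [position, velocity] state.
def toy_physics_op(x):
    pos, vel = x[..., :1], x[..., 1:]
    return torch.cat([vel, -pos], dim=-1)     # dx/dt for a unit harmonic oscillator

model = PhysicsEncodedResidualNet(dim=2, physics_op=toy_physics_op)
print(model(torch.randn(4, 2)).shape)         # torch.Size([4, 2])
```

Because each learning block only has to model the discrepancy between the physics prior and the observed behaviour, the data and parameter requirements stay low, which is the trade-off the abstract emphasises.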
Related papers
- Transport-Embedded Neural Architecture: Redefining the Landscape of physics aware neural models in fluid mechanics [0.0]
A physical problem, the Taylor-Green vortex, defined on a bi-periodic domain, is used as a benchmark to evaluate the performance of both the standard physics-informed neural network and our model.
Results show that, while the standard physics-informed neural network fails to predict the solution accurately and merely returns the initial condition for the entire time span, our model successfully captures the temporal changes in the physics.
arXiv Detail & Related papers (2024-10-05T10:32:51Z)
- MINN: Learning the dynamics of differential-algebraic equations and application to battery modeling [3.900623554490941]
We propose a novel architecture for generating model-integrated neural networks (MINN).
MINN enables integration at the level of learning the physics-based dynamics of the system.
We apply the proposed neural network architecture to model the electrochemical dynamics of lithium-ion batteries.
arXiv Detail & Related papers (2023-04-27T09:11:40Z)
- Human Trajectory Prediction via Neural Social Physics [63.62824628085961]
Trajectory prediction has been widely pursued in many fields, and many model-based and model-free methods have been explored.
We propose a new method combining both methodologies based on a new Neural Differential Equation model.
Our new model (Neural Social Physics or NSP) is a deep neural network within which we use an explicit physics model with learnable parameters.
arXiv Detail & Related papers (2022-07-21T12:11:18Z)
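For the Neural Social Physics entry above, the core mechanism, an explicit physics model with learnable parameters embedded inside a deep network, can be sketched as follows. The goal-attraction force, the learnable relaxation time, and all names are assumptions for illustration rather than the NSP implementation.

```python
# Illustrative sketch: a network whose forward pass contains an explicit physics term
# (a goal-attraction "force") with learnable parameters, plus a learned correction.
import torch
import torch.nn as nn


class NeuralSocialPhysicsSketch(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.tau = nn.Parameter(torch.tensor(0.5))     # learnable relaxation time
        self.correction = nn.Sequential(nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, pos, vel, goal, dt: float = 0.4):
        # Explicit physics: steer the velocity toward the goal with a learnable time constant.
        desired = (goal - pos) / (goal - pos).norm(dim=-1, keepdim=True).clamp(min=1e-6)
        accel = (desired - vel) / self.tau
        # Data-driven correction learned from trajectories (e.g. interactions, context).
        accel = accel + self.correction(torch.cat([pos, vel], dim=-1))
        vel_next = vel + dt * accel
        return pos + dt * vel_next, vel_next


model = NeuralSocialPhysicsSketch()
p, v = model(torch.zeros(3, 2), torch.zeros(3, 2), torch.ones(3, 2))
print(p.shape, v.shape)  # torch.Size([3, 2]) torch.Size([3, 2])
```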
- Multi-Objective Physics-Guided Recurrent Neural Networks for Identifying Non-Autonomous Dynamical Systems [0.0]
We propose a physics-guided hybrid approach for modeling non-autonomous systems under control.
The physics-based model is extended by a recurrent neural network and trained using a sophisticated multi-objective strategy.
Experiments conducted on real data reveal substantial accuracy improvements by our approach compared to a physics-based model.
arXiv Detail & Related papers (2022-04-27T14:33:02Z)
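A rough sketch of the hybrid scheme summarized above: a physics-based step extended by a recurrent network and trained against several objectives at once. The placeholder physics model, the GRU correction, and the simple weighted-sum loss standing in for the paper's multi-objective strategy are all assumptions.

```python
import torch
import torch.nn as nn


class PhysicsGuidedRNN(nn.Module):
    def __init__(self, state_dim: int, ctrl_dim: int, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(state_dim + ctrl_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def physics_step(self, x, u):
        return 0.9 * x + 0.1 * u                       # placeholder first-principles model

    def forward(self, x0, controls):                   # controls: (batch, time, ctrl_dim)
        states, x, h = [], x0, None
        for t in range(controls.shape[1]):
            u = controls[:, t]
            x_phys = self.physics_step(x, u)
            out, h = self.rnn(torch.cat([x, u], dim=-1).unsqueeze(1), h)
            x = x_phys + self.head(out[:, 0])          # the RNN corrects the physics prediction
            states.append(x)
        return torch.stack(states, dim=1)


model = PhysicsGuidedRNN(state_dim=2, ctrl_dim=2)
pred = model(torch.zeros(8, 2), torch.randn(8, 10, 2))
target = torch.randn(8, 10, 2)
# Two objectives combined with fixed weights: data fit plus smoothness of the rollout.
loss = nn.functional.mse_loss(pred, target) + 1e-3 * pred.diff(dim=1).pow(2).mean()
print(loss.item())
```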
- Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z)
- DeepPhysics: a physics aware deep learning framework for real-time simulation [0.0]
We propose a solution to simulate hyper-elastic materials using a data-driven approach.
A neural network is trained to learn the non-linear relationship between boundary conditions and the resulting displacement field.
The results show that our network architecture trained with a limited amount of data can predict the displacement field in less than a millisecond.
arXiv Detail & Related papers (2021-09-17T12:15:47Z)
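The DeepPhysics summary above amounts to a surrogate that maps boundary-condition parameters to a displacement field in one forward pass. A minimal sketch with assumed input and output sizes, not the paper's actual mesh or network, is:

```python
# A feed-forward surrogate mapping boundary-condition parameters to a flattened
# displacement field, so inference is a single (sub-millisecond) forward pass.
import torch
import torch.nn as nn

n_bc, n_nodes = 6, 500                         # assumed: 6 BC parameters, 500 mesh nodes

surrogate = nn.Sequential(
    nn.Linear(n_bc, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3 * n_nodes),               # 3D displacement per mesh node
)

bc = torch.randn(16, n_bc)                     # a batch of boundary-condition samples
displacement = surrogate(bc).view(16, n_nodes, 3)
print(displacement.shape)                      # torch.Size([16, 500, 3])
```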
- Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z)
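The physics-grounded latent space described above can be illustrated by splitting the latent code so that one part drives a fixed physics decoder while the remainder drives a neural decoder that augments it. The sinusoidal physics decoder, dimensions, and names here are stand-ins, not the paper's model.

```python
import math

import torch
import torch.nn as nn


class PhysicsIntegratedVAE(nn.Module):
    def __init__(self, x_dim: int = 50, z_phys: int = 2, z_aux: int = 4):
        super().__init__()
        self.z_phys = z_phys
        self.encoder = nn.Linear(x_dim, 2 * (z_phys + z_aux))       # outputs mean and log-variance
        self.nn_decoder = nn.Linear(z_phys + z_aux, x_dim)          # learned augmentation
        self.register_buffer("t", torch.linspace(0, 1, x_dim))

    def physics_decoder(self, z):                                   # physics-grounded latent part
        amp, freq = z[:, :1], z[:, 1:2]
        return amp * torch.sin(2 * math.pi * freq * self.t)

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)     # reparameterisation trick
        x_phys = self.physics_decoder(z[:, :self.z_phys])
        return x_phys + self.nn_decoder(z), mu, logvar              # physics output plus neural correction


vae = PhysicsIntegratedVAE()
recon, mu, logvar = vae(torch.randn(8, 50))
print(recon.shape)  # torch.Size([8, 50])
```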
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Data-Efficient Learning for Complex and Real-Time Physical Problem Solving using Augmented Simulation [49.631034790080406]
We present a task for navigating a marble to the center of a circular maze.
We present a model that learns to move a marble in the complex environment within minutes of interacting with the real system.
arXiv Detail & Related papers (2020-11-14T02:03:08Z)
- Modeling System Dynamics with Physics-Informed Neural Networks Based on Lagrangian Mechanics [3.214927790437842]
Two main modeling approaches often fail to meet requirements: first-principles methods suffer from high bias, whereas data-driven modeling tends to have high variance.
We present physics-informed neural ordinary differential equations (PINODE), a hybrid model that combines the two modeling techniques to overcome the aforementioned problems.
Our findings are of interest for model-based control and system identification of mechanical systems.
arXiv Detail & Related papers (2020-05-29T15:10:43Z)
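The PINODE entry above combines first-principles dynamics with a learned component inside an ODE model. A hedged sketch of such a hybrid right-hand side, rolled out with a plain explicit Euler step rather than a proper ODE solver, and with a pendulum term standing in for the paper's Lagrangian-mechanics prior:

```python
import torch
import torch.nn as nn


class HybridODE(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.residual = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 2))

    def known_physics(self, x):                      # undamped pendulum as the assumed prior
        theta, omega = x[..., :1], x[..., 1:]
        return torch.cat([omega, -torch.sin(theta)], dim=-1)

    def forward(self, x0, steps: int = 100, dt: float = 0.01):
        xs, x = [x0], x0
        for _ in range(steps):
            dxdt = self.known_physics(x) + self.residual(x)   # physics term plus learned part
            x = x + dt * dxdt                                 # explicit Euler step for brevity
            xs.append(x)
        return torch.stack(xs, dim=1)                # trajectory: (batch, steps + 1, 2)


model = HybridODE()
traj = model(torch.tensor([[1.0, 0.0]]))
print(traj.shape)  # torch.Size([1, 101, 2])
```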
- Learning to Simulate Complex Physics with Graph Networks [68.43901833812448]
We present a machine learning framework and model implementation that can learn to simulate a wide variety of challenging physical domains.
Our framework, which we term "Graph Network-based Simulators" (GNS), represents the state of a physical system with particles, expressed as nodes in a graph, and computes dynamics via learned message-passing.
Our results show that our model can generalize from single-timestep predictions with thousands of particles during training, to different initial conditions, thousands of timesteps, and at least an order of magnitude more particles at test time.
arXiv Detail & Related papers (2020-02-21T16:44:28Z)
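The GNS recipe above (particles as graph nodes, dynamics from learned message passing) can be illustrated with a deliberately tiny, single-round message-passing model. The radius-based edge construction, feature layout, and 2D acceleration output are assumptions made for the sketch.

```python
import torch
import torch.nn as nn


class TinyGNS(nn.Module):
    def __init__(self, node_dim: int = 4, hidden: int = 64):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * node_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.node_mlp = nn.Sequential(nn.Linear(node_dim + hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, feats, pos, radius: float = 0.2):
        # Connect particles closer than `radius` (dense O(N^2) neighbour search for brevity).
        dist = torch.cdist(pos, pos)
        src, dst = torch.nonzero((dist < radius) & (dist > 0), as_tuple=True)
        # Messages along edges, summed at the receiving particle.
        msgs = self.edge_mlp(torch.cat([feats[src], feats[dst]], dim=-1))
        agg = torch.zeros(feats.shape[0], msgs.shape[-1]).index_add_(0, dst, msgs)
        # Node update: predicted 2D acceleration for each particle.
        return self.node_mlp(torch.cat([feats, agg], dim=-1))


model = TinyGNS()
pos = torch.rand(100, 2)                                  # 100 particles in the unit square
feats = torch.cat([pos, torch.zeros(100, 2)], dim=-1)     # [position, velocity] node features
accel = model(feats, pos)
print(accel.shape)  # torch.Size([100, 2])
```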