A Digital Twin for Diesel Engines: Operator-infused Physics-Informed Neural Networks with Transfer Learning for Engine Health Monitoring
- URL: http://arxiv.org/abs/2412.11967v2
- Date: Fri, 10 Oct 2025 16:54:09 GMT
- Title: A Digital Twin for Diesel Engines: Operator-infused Physics-Informed Neural Networks with Transfer Learning for Engine Health Monitoring
- Authors: Kamaljyoti Nath, Varun Kumar, Daniel J. Smith, George Em Karniadakis
- Abstract summary: We propose a novel hybrid framework that combines physics-informed neural networks (PINNs) with deep operator networks (DeepONet). Our method leverages physics-based system knowledge in combination with data-driven training of neural networks to enhance model applicability. Our framework combines the interpretability of physics-based models with the flexibility of deep learning, offering substantial gains in generalization, accuracy, and deployment efficiency for diesel engine diagnostics.
- Score: 8.170475210242463
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Improving diesel engine efficiency, reducing emissions, and enabling robust health monitoring have been critical research topics in engine modelling. While recent advancements in the use of neural networks for system monitoring have shown promising results, such methods often focus on component-level analysis and lack generalizability and physical interpretability. In this study, we propose a novel hybrid framework that combines physics-informed neural networks (PINNs) with deep operator networks (DeepONet) to enable accurate and computationally efficient parameter identification in mean-value diesel engine models. Our method leverages physics-based system knowledge in combination with data-driven training of neural networks to enhance model applicability. Incorporating offline-trained DeepONets to predict actuator dynamics significantly lowers the online computation cost when compared to the existing PINN framework. To address the re-training burden typical of PINNs under varying input conditions, we propose two transfer learning (TL) strategies: (i) a multi-stage TL scheme offering better runtime efficiency than full online training of the PINN model and (ii) a few-shot TL scheme that freezes a shared multi-head network body and computes physics-based derivatives required for model training outside the training loop. The second strategy offers a computationally inexpensive and physics-based approach for predicting engine dynamics and parameter identification, offering computational efficiency over the existing PINN framework. Compared to existing health monitoring methods, our framework combines the interpretability of physics-based models with the flexibility of deep learning, offering substantial gains in generalization, accuracy, and deployment efficiency for diesel engine diagnostics.
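The few-shot TL strategy in the abstract rests on an observation worth making concrete: once the shared network body is frozen, the model output is linear in the head weights, so the physics-based derivatives of the fixed hidden features can be computed once, outside any training loop. The following is a hypothetical minimal sketch of that idea (not the authors' code): random tanh features stand in for the pre-trained body, and a toy decay ODE dx/dt = -k*x, x(0) = 1 stands in for the engine dynamics.

```python
import numpy as np

# Hypothetical sketch of few-shot adaptation with a frozen body: the output
# x(t) = H(t) @ c is linear in the head weights c, so fitting the physics
# residual reduces to one linear least-squares solve -- no gradient loop.
rng = np.random.default_rng(0)
n_feat, k = 50, 2.0
w = rng.normal(scale=2.0, size=n_feat)   # frozen "body" weights (illustrative)
b = rng.normal(size=n_feat)              # frozen "body" biases

t = np.linspace(0.0, 2.0, 100)[:, None]  # collocation points
H = np.tanh(t * w + b)                   # hidden features, shape (100, 50)
dH = w * (1.0 - H ** 2)                  # exact d/dt of features, precomputed
                                         # once outside the "training" loop

# Physics residual dx/dt + k*x = 0 at the collocation points, plus a weighted
# initial condition x(0) = 1, stacked into one least-squares system in c.
ic = np.tanh(b)[None, :]                 # features at t = 0
A = np.vstack([dH + k * H, 10.0 * ic])
y = np.concatenate([np.zeros(len(t)), [10.0]])
c, *_ = np.linalg.lstsq(A, y, rcond=None)

x_pred = H @ c
x_true = np.exp(-k * t[:, 0])
print(f"max abs error vs exp(-kt): {np.max(np.abs(x_pred - x_true)):.2e}")
```

Because only the linear solve depends on the new operating condition, adaptation cost is a single `lstsq` call rather than full online PINN training, which is the efficiency argument the abstract makes.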
Related papers
- Limitations of Physics-Informed Neural Networks: a Study on Smart Grid Surrogation [29.49941497527361]
PINNs present a transformative approach for smart grid modeling by integrating physical laws directly into learning frameworks. This paper evaluates PINNs' capabilities as surrogate models for smart grid dynamics. We demonstrate PINNs' superior generalization, outperforming data-driven models in error reduction.
arXiv Detail & Related papers (2025-08-29T12:15:32Z)
- Modelling of Underwater Vehicles using Physics-Informed Neural Networks with Control [1.9343033692333778]
Physics-informed neural networks (PINNs) integrate physical laws with data-driven models to improve generalization and sample efficiency. This work introduces an open-source implementation of the Physics-Informed Neural Network with Control framework, designed to model the dynamics of an underwater vehicle.
arXiv Detail & Related papers (2025-04-28T17:38:57Z)
- Brain-Inspired Online Adaptation for Remote Sensing with Spiking Neural Network [17.315710646752176]
This work proposes an online adaptation framework based on spiking neural networks (SNNs) for remote sensing.
To our knowledge, this work is the first to address the online adaptation of SNNs.
The proposed method enables energy-efficient and fast online adaptation on edge devices, and has great potential in applications such as remote perception on on-orbit satellites and UAVs.
arXiv Detail & Related papers (2024-09-03T08:47:53Z)
- DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z)
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Mobile Traffic Prediction at the Edge through Distributed and Transfer Learning [2.687861184973893]
The research in this topic concentrated on making predictions in a centralized fashion, by collecting data from the different network elements.
We propose a novel prediction framework based on edge computing which uses datasets obtained on the edge through a large measurement campaign.
arXiv Detail & Related papers (2023-10-22T23:48:13Z)
- Physics-informed neural networks for predicting gas flow dynamics and unknown parameters in diesel engines [0.0]
The aim is to evaluate the engine dynamics, identify unknown parameters in a "mean value" model, and anticipate maintenance requirements.
The PINN model is applied to diesel engines with a variable-geometry turbocharger and exhaust gas recirculation.
The study considers the use of deep neural networks (DNNs) in addition to the PINN model.
arXiv Detail & Related papers (2023-04-26T19:37:18Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Topics in Deep Learning and Optimization Algorithms for IoT Applications in Smart Transportation [0.0]
This thesis investigates how different optimization algorithms and machine learning techniques can be leveraged to improve system performance.
In the first topic, we propose an optimal transmission frequency management scheme using decentralized ADMM-based method.
In the second topic, we leverage graph neural network (GNN) for demand prediction for shared bikes.
In the last topic, we consider a highway traffic network scenario where frequent lane changing behaviors may occur with probability.
arXiv Detail & Related papers (2022-10-13T11:45:30Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot and Radio-controlled (RC) car.
arXiv Detail & Related papers (2022-04-09T22:07:34Z)
- On feedforward control using physics-guided neural networks: Training cost regularization and optimized initialization [0.0]
Performance of model-based feedforward controllers is typically limited by the accuracy of the inverse system dynamics model.
This paper proposes a regularization method via identified physical parameters.
It is validated on a real-life industrial linear motor, where it delivers better tracking accuracy and extrapolation.
arXiv Detail & Related papers (2022-01-28T12:51:25Z)
- Recursive Least-Squares Estimator-Aided Online Learning for Visual Tracking [58.14267480293575]
We propose a simple yet effective online learning approach for few-shot online adaptation without requiring offline training.
It allows an in-built memory retention mechanism for the model to remember the knowledge about the object seen before.
We evaluate our approach based on two networks in the online learning families for tracking, i.e., multi-layer perceptrons in RT-MDNet and convolutional neural networks in DiMP.
arXiv Detail & Related papers (2021-12-28T06:51:18Z)
- DAE-PINN: A Physics-Informed Neural Network Model for Simulating Differential-Algebraic Equations with Application to Power Networks [8.66798555194688]
We develop DAE-PINN, the first effective deep-learning framework for learning and simulating the solution trajectories of nonlinear differential-algebraic equations.
Our framework enforces the neural network to satisfy the DAEs as (approximate) hard constraints using a penalty-based method.
We showcase the effectiveness and accuracy of DAE-PINN by learning and simulating the solution trajectories of a three-bus power network.
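The penalty-based enforcement described in this entry can be sketched in a few lines. The toy DAE and all names below are illustrative assumptions, not taken from the paper: a differential residual, an algebraic-constraint residual, and an initial-condition residual are folded into one weighted penalty loss, which a PINN would then minimize.

```python
import numpy as np

# Hedged sketch of a penalty loss for an index-1 DAE (illustrative example):
#   y' = -z,  0 = z - y,  y(0) = 1,  exact solution y = z = exp(-t).
def dae_penalty_loss(t, y, z, lam=10.0):
    dy_dt = np.gradient(y, t)        # finite-difference stand-in for autodiff
    ode_res = dy_dt + z              # differential residual: y' + z
    alg_res = z - y                  # algebraic constraint residual: z - y
    ic_res = y[0] - 1.0              # initial-condition residual
    return np.mean(ode_res**2) + lam * np.mean(alg_res**2) + ic_res**2

t = np.linspace(0.0, 1.0, 400)
y_exact = np.exp(-t)
loss_exact = dae_penalty_loss(t, y_exact, y_exact)     # near zero
loss_bad = dae_penalty_loss(t, np.cos(t), np.cos(t))   # wrong candidate
print(loss_exact, loss_bad)
```

The exact trajectory drives the penalty close to zero (up to finite-difference error), while a wrong candidate does not, which is what lets a gradient-based trainer steer the network toward (approximately) satisfying the DAE.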
arXiv Detail & Related papers (2021-09-09T14:30:28Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- The training accuracy of two-layer neural networks: its estimation and understanding using random datasets [0.0]
We propose a novel theory based on space partitioning to estimate the approximate training accuracy for two-layer neural networks on random datasets without training.
Our method estimates the training accuracy for two-layer fully-connected neural networks on two-class random datasets using only three arguments.
arXiv Detail & Related papers (2020-10-26T07:21:29Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective to design of future DeepSNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- Indirect and Direct Training of Spiking Neural Networks for End-to-End Control of a Lane-Keeping Vehicle [12.137685936113384]
Building spiking neural networks (SNNs) based on biological synaptic plasticities holds a promising potential for accomplishing fast and energy-efficient computing.
In this paper, we introduce both indirect and direct end-to-end training methods of SNNs for a lane-keeping vehicle.
arXiv Detail & Related papers (2020-03-10T09:35:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.