Uncovering the Underlying Physics of Degrading System Behavior Through a
Deep Neural Network Framework: The Case of Remaining Useful Life Prognosis
- URL: http://arxiv.org/abs/2006.09288v1
- Date: Wed, 10 Jun 2020 21:05:59 GMT
- Authors: Sergio Cofre-Martel, Enrique Lopez Droguett and Mohammad Modarres
- Abstract summary: We propose an open-box approach using a deep neural network framework to explore the physics of degradation.
The framework has three stages, and it aims to discover a latent variable and corresponding PDE to represent the health state of the system.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) has become an essential tool in prognosis and health
management (PHM), commonly used as a regression algorithm for the prognosis of
a system's behavior. One particular metric of interest is the remaining useful
life (RUL) estimated using monitoring sensor data. Most of these deep learning
applications treat the algorithms as black-box functions, offering little to no
insight into how the data are interpreted. This becomes an issue if the models break
the governing laws of physics or other natural sciences when no constraints are
imposed. The latest research efforts have focused on applying complex DL models
to achieve a low prediction error rather than studying how the models interpret
the behavior of the data and the system itself. In this paper, we propose an
open-box approach using a deep neural network framework to explore the physics
of degradation through partial differential equations (PDEs). The framework has
three stages, and it aims to discover a latent variable and corresponding PDE
to represent the health state of the system. Models are trained as a supervised
regression and designed to output the RUL as well as a latent variable map that
can be used and interpreted as the system's health indicator.
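The abstract's two-output design (a scalar RUL plus a latent health-indicator map, both regressed from monitoring sensor data) can be sketched as a shared encoder with two heads. Everything below is an illustrative assumption: the dimensions, layer sizes, and activation are not specified in the abstract, and the weights are random rather than trained. This is a minimal NumPy forward pass, not the authors' framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical dimensions: a window of 30 time steps from 14 sensors.
window, n_sensors, hidden = 30, 14, 32

# Shared encoder weights (random here; learned via supervised regression in practice).
W_enc = rng.normal(scale=0.1, size=(n_sensors, hidden))
b_enc = np.zeros(hidden)

# Head 1: per-time-step latent health indicator (the "latent variable map").
W_health = rng.normal(scale=0.1, size=(hidden, 1))

# Head 2: a single RUL estimate pooled over the window.
W_rul = rng.normal(scale=0.1, size=(hidden, 1))

x = rng.normal(size=(window, n_sensors))   # one sensor window
h = relu(x @ W_enc + b_enc)                # shared features
health_map = h @ W_health                  # shape (window, 1)
rul = float((h @ W_rul).mean())            # scalar RUL estimate

print(health_map.shape, np.isfinite(rul))
```

In the paper's framework the latent map would additionally be constrained to satisfy a discovered PDE; that PDE-discovery stage is omitted from this sketch.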
Related papers
- PhyMPGN: Physics-encoded Message Passing Graph Network for spatiotemporal PDE systems [31.006807854698376]
We propose a new graph learning approach, namely, Physics-encoded Message Passing Graph Network (PhyMPGN)
We incorporate a GNN into a numerical integrator to approximate the temporal marching of spatiotemporal dynamics for a given PDE system.
PhyMPGN is capable of accurately predicting various types of spatiotemporal dynamics on coarse unstructured meshes.
arXiv Detail & Related papers (2024-10-02T08:54:18Z)
- Adversarial Learning for Neural PDE Solvers with Sparse Data [4.226449585713182]
This study introduces a universal learning strategy for neural network PDE solvers, named Systematic Model Augmentation for Robust Training (SMART).
By focusing on challenging and improving the model's weaknesses, SMART reduces generalization error during training under data-scarce conditions.
arXiv Detail & Related papers (2024-09-04T04:18:25Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Analysis of Numerical Integration in RNN-Based Residuals for Fault Diagnosis of Dynamic Systems [0.6999740786886536]
Data-driven modeling and machine learning are widely used to model the behavior of dynamic systems.
The paper includes a case study of a heavy-duty truck's after-treatment system to highlight the potential of these techniques for improving fault diagnosis performance.
arXiv Detail & Related papers (2023-05-08T12:48:18Z)
- Autoregressive models for biomedical signal processing [0.0]
We present a framework for autoregressive modelling that incorporates uncertainties explicitly via a loss function.
Our work shows that the procedure is able to successfully denoise time series and successfully reconstruct system parameters.
This new paradigm can be used in a multitude of applications in neuroscience such as brain-computer interface data analysis and better understanding of brain dynamics in diseases such as epilepsy.
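The autoregressive idea can be grounded with a minimal example: fitting AR coefficients to a simulated series by ordinary least squares. This plain OLS sketch is an assumption for illustration only; it does not include the explicit uncertainty-aware loss the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(2) process: x_t = a1*x_{t-1} + a2*x_{t-2} + noise.
a1, a2 = 0.6, -0.2
n = 2000
x = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + 0.1 * rng.normal()

# Ordinary least squares estimate of the AR coefficients.
X = np.column_stack([x[1:-1], x[:-2]])   # lagged regressors
y = x[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(coef, 2))
```

With enough samples the least-squares estimate recovers the generating coefficients closely; the paper's contribution is to go further and model the estimation uncertainty explicitly through the loss function.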
arXiv Detail & Related papers (2023-04-17T08:57:36Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are a deep learning model that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- PINN Training using Biobjective Optimization: The Trade-off between Data Loss and Residual Loss [0.0]
Physics-informed neural networks (PINNs) have proven to be an efficient tool for representing problems governed by differential equations for which measured data are available.
In this paper, we suggest a multiobjective perspective on the training of PINNs by treating the data loss and the residual loss as two individual objective functions.
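The data-loss/residual-loss trade-off can be made concrete on a toy problem. The ODE u' + u = 0 (solution e^{-x}), the cubic model, and the weights below are all illustrative assumptions, not taken from the paper; the sketch only shows how a scalarised biobjective combines the two terms.

```python
import numpy as np

# Candidate model: cubic polynomial u(x) = c0 + c1 x + c2 x^2 + c3 x^3,
# fixed here to the Taylor coefficients of exp(-x) for illustration.
c = np.array([1.0, -1.0, 0.5, -1.0 / 6.0])

xs = np.linspace(0.0, 1.0, 50)
u = (xs[:, None] ** np.arange(4)) @ c                             # model values
du = (xs[:, None] ** np.arange(3)) @ (c[1:] * np.arange(1, 4))    # exact derivative u'(x)

# Data loss: mismatch against observations of the true solution exp(-x).
data_loss = np.mean((u - np.exp(-xs)) ** 2)

# Residual loss: how badly the model violates the ODE u' + u = 0.
residual_loss = np.mean((du + u) ** 2)

# Scalarised biobjective: sweep the trade-off weight w.
for w in (0.1, 0.5, 0.9):
    print(w, w * data_loss + (1 - w) * residual_loss)
```

Treating the two losses as separate objectives, as the paper suggests, replaces the single weight sweep with a search for the whole Pareto front between data fit and physics consistency.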
arXiv Detail & Related papers (2023-02-03T15:27:50Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Lower Bounds (ELBOs) for ME-NODE and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Integrating Expert ODEs into Neural ODEs: Pharmacology and Disease Progression [71.7560927415706]
The latent hybridisation model (LHM) integrates a system of expert-designed ODEs with machine-learned Neural ODEs to fully describe the dynamics of the system.
We evaluate LHM on synthetic data as well as real-world intensive care data of COVID-19 patients.
arXiv Detail & Related papers (2021-06-05T11:42:45Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, physics-informed neural networks (PINNs)
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.