Adaptive Degradation Process with Deep Learning-Driven Trajectory
- URL: http://arxiv.org/abs/2103.11598v1
- Date: Mon, 22 Mar 2021 06:00:42 GMT
- Title: Adaptive Degradation Process with Deep Learning-Driven Trajectory
- Authors: Li Yang
- Abstract summary: Remaining useful life (RUL) estimation is a crucial component in the implementation of intelligent predictive maintenance and health management.
This paper develops a hybrid DNN-based prognostic approach, where a Wiener-based degradation model is enhanced with adaptive drift to characterize the system degradation.
An LSTM-CNN encoder-decoder is developed to predict future degradation trajectories by jointly learning noise coefficients as well as drift coefficients, and adaptive drift is updated via Bayesian inference.
- Score: 5.060233857860902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Remaining useful life (RUL) estimation is a crucial component in the
implementation of intelligent predictive maintenance and health management.
Deep neural network (DNN) approaches have proven effective in RUL
estimation due to their capacity to handle high-dimensional non-linear
degradation features. However, the application of DNNs in practice faces two
challenges: (a) online updates of lifetime information are often unavailable, and
(b) uncertainties in predicted values may not be analytically quantified. This
paper addresses these issues by developing a hybrid DNN-based prognostic
approach, where a Wiener-based degradation model is enhanced with adaptive
drift to characterize the system degradation. An LSTM-CNN encoder-decoder is
developed to predict future degradation trajectories by jointly learning noise
coefficients as well as drift coefficients, and adaptive drift is updated via
Bayesian inference. A computationally efficient algorithm is proposed for the
calculation of RUL distributions. Numerical experiments are presented using
turbofan engine degradation data to demonstrate the superior RUL prediction
accuracy of our proposed approach.
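For readers less familiar with Wiener-process prognostics, the sketch below illustrates the generic building blocks the abstract refers to: a drift-plus-diffusion degradation model, a conjugate Gaussian (Bayesian) update of the drift from observed degradation increments, and the closed-form inverse Gaussian first-passage-time density that serves as the RUL distribution. It is a minimal illustration under standard assumptions, not the paper's LSTM-CNN-driven algorithm; the threshold, prior, coefficient values, and simulated data are hypothetical.

```python
import numpy as np

# Wiener-process degradation: X(t) = x0 + lam * t + sigma * B(t).
# Failure is declared when X(t) first crosses a threshold w.

def posterior_drift(dx, dt, sigma, mu0, var0):
    """Conjugate Gaussian update of the drift lam given degradation
    increments dx observed over time steps dt (standard result for a
    Wiener process with known diffusion coefficient sigma)."""
    precision = 1.0 / var0 + np.sum(dt) / sigma**2
    mean = (mu0 / var0 + np.sum(dx) / sigma**2) / precision
    return mean, 1.0 / precision

def rul_pdf(t, x_now, w, lam, sigma):
    """Inverse Gaussian first-passage-time density of the remaining
    useful life, given the current degradation level x_now, failure
    threshold w, drift lam and diffusion sigma."""
    d = w - x_now  # remaining distance to the failure threshold
    return d / (sigma * np.sqrt(2.0 * np.pi * t**3)) * \
        np.exp(-(d - lam * t)**2 / (2.0 * sigma**2 * t))

# --- hypothetical numbers, for illustration only ---
rng = np.random.default_rng(0)
lam_true, sigma, w = 0.8, 0.3, 60.0
dt = np.full(50, 1.0)
dx = lam_true * dt + sigma * np.sqrt(dt) * rng.standard_normal(50)
x_now = dx.sum()

lam_hat, lam_var = posterior_drift(dx, dt, sigma, mu0=0.5, var0=1.0)
t_grid = np.linspace(0.1, 80.0, 800)
pdf = rul_pdf(t_grid, x_now, w, lam_hat, sigma)
print(f"posterior drift {lam_hat:.3f}, RUL mode near t = {t_grid[pdf.argmax()]:.1f}")
```

In the paper itself, the drift and noise coefficients are supplied by the LSTM-CNN encoder-decoder and the adaptive drift is refreshed by Bayesian inference as new condition-monitoring data arrive; the closed-form density above is the generic Wiener-process result such approaches build on.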
Related papers
- An Investigation on Machine Learning Predictive Accuracy Improvement and Uncertainty Reduction using VAE-based Data Augmentation [2.517043342442487]
Deep generative learning uses certain ML models to learn the underlying distribution of existing data and generate synthetic samples that resemble the real data.
In this study, our objective is to evaluate the effectiveness of data augmentation using variational autoencoder (VAE)-based deep generative models.
We investigated whether the data augmentation leads to improved accuracy in the predictions of a deep neural network (DNN) model trained using the augmented data.
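As a rough illustration of the augmentation idea described in this entry (not the study's actual VAE, data, or training setup; the architecture, latent dimension, and sampling step are assumptions, and the training loop is omitted), a minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal VAE: learns the data distribution, then synthesizes new
    samples by decoding draws from the latent prior."""
    def __init__(self, n_features, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    recon = ((x_hat - x) ** 2).sum()                              # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL term
    return recon + kl

# After training, synthetic samples for augmentation are decoded from the
# prior; the augmented set (real + synthetic) then trains the downstream DNN.
model = VAE(n_features=14)                 # hypothetical feature dimension
with torch.no_grad():
    x_synth = model.decoder(torch.randn(256, 8))
```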
arXiv Detail & Related papers (2024-10-24T18:15:48Z)
- Scalable Subsampling Inference for Deep Neural Networks [0.0]
A non-asymptotic error bound has been developed to measure the performance of the fully connected DNN estimator.
A non-random subsampling technique, scalable subsampling, is applied to construct a 'subagged' DNN estimator.
The proposed confidence/prediction intervals appear to work well in finite samples.
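A toy sketch of the subagging idea (aggregating DNN fits over non-overlapping, non-random subsamples and reading an interval off their spread); scikit-learn's MLPRegressor stands in for the fully connected DNN, and the block scheme and percentile interval are illustrative assumptions rather than the paper's scalable-subsampling construction:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def subagged_predict(X, y, X_new, n_blocks=5, seed=0):
    """Fit one small network per non-overlapping subsample, then
    aggregate; the spread across subsamples gives a crude interval."""
    preds = []
    for block in np.array_split(np.arange(len(X)), n_blocks):
        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                           random_state=seed)
        net.fit(X[block], y[block])
        preds.append(net.predict(X_new))
    preds = np.stack(preds)                       # shape (n_blocks, n_new)
    point = preds.mean(axis=0)                    # subagged point estimator
    lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)
    return point, lo, hi

# Illustrative synthetic regression data
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
point, lo, hi = subagged_predict(X, y, X[:5])
```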
arXiv Detail & Related papers (2024-05-14T02:11:38Z)
- Enhancing Deep Neural Network Training Efficiency and Performance through Linear Prediction [0.0]
Deep neural networks (DNN) have achieved remarkable success in various fields, including computer vision and natural language processing.
This paper proposes a method to optimize the training effectiveness of DNNs, with the goal of improving model performance.
arXiv Detail & Related papers (2023-10-17T03:11:30Z)
- Unmatched uncertainty mitigation through neural network supported model predictive control [7.036452261968766]
We utilize a deep neural network (DNN) as an oracle in the underlying optimization problem of learning-based MPC (LBMPC).
We employ a dual-timescale adaptation mechanism, where the weights of the last layer of the neural network are updated in real time.
Results indicate that the proposed approach is implementable in real time and carries the theoretical guarantees of LBMPC.
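This entry describes a dual-timescale scheme in which only the network's last layer is adapted online. A minimal sketch of that idea, using recursive least squares on the final linear layer over frozen features (the feature extractor, dimensions, and the RLS update are assumptions for illustration, not the paper's exact adaptation law or its LBMPC integration):

```python
import numpy as np

class LastLayerRLS:
    """Fast timescale: adapt only the last linear layer w of a DNN
    oracle, y_hat = w @ phi(x), by recursive least squares, while the
    feature extractor phi (the rest of the network) is retrained on a
    slower, offline timescale."""
    def __init__(self, n_features, p0=100.0):
        self.w = np.zeros(n_features)
        self.P = p0 * np.eye(n_features)

    def update(self, phi, y):
        k = self.P @ phi / (1.0 + phi @ self.P @ phi)   # gain
        self.w += k * (y - self.w @ phi)                # real-time weight update
        self.P -= np.outer(k, phi @ self.P)             # covariance update
        return self.w

# Illustrative stream of (frozen-feature, target) pairs
rng = np.random.default_rng(0)
rls = LastLayerRLS(n_features=4)
w_true = np.array([1.0, -0.5, 0.3, 2.0])
for _ in range(200):
    phi = rng.standard_normal(4)                 # phi(x): frozen DNN features
    y = w_true @ phi + 0.01 * rng.standard_normal()
    rls.update(phi, y)
```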
arXiv Detail & Related papers (2023-04-22T04:49:48Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
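Concretely, implicit (stochastic) gradient descent replaces the explicit step theta_{k+1} = theta_k - eta * grad L(theta_k) with the implicit equation theta_{k+1} = theta_k - eta * grad L(theta_{k+1}), which is more tolerant of stiff losses and large step sizes. A generic sketch of one implicit step solved as a root-finding problem (the toy quadratic loss and the inner solver are illustrative assumptions; in the paper the idea is applied to minibatch PINN training):

```python
import numpy as np
from scipy.optimize import fsolve

def implicit_sgd_step(theta, grad_fn, lr):
    """One implicit step: solve theta_new = theta - lr * grad_fn(theta_new)
    instead of taking the explicit update theta - lr * grad_fn(theta)."""
    residual = lambda th_new: th_new - theta + lr * grad_fn(th_new)
    return fsolve(residual, theta)

# Toy stiff quadratic: L(theta) = 0.5 * theta^T A theta, grad = A @ theta.
# Explicit descent with lr = 0.1 diverges along the eigenvalue-50 direction
# (factor 1 - 0.1 * 50 = -4), while the implicit update remains stable.
A = np.diag([1.0, 50.0])
grad_fn = lambda th: A @ th
theta = np.array([1.0, 1.0])
for _ in range(50):
    theta = implicit_sgd_step(theta, grad_fn, lr=0.1)
print(theta)  # approaches the minimizer at the origin
```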
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Accurate Remaining Useful Life Prediction with Uncertainty Quantification: a Deep Learning and Nonstationary Gaussian Process Approach [0.0]
Remaining useful life (RUL) refers to the expected remaining lifespan of a component or system.
We devise a highly accurate RUL prediction model with uncertainty quantification, which integrates and leverages the advantages of deep learning and nonstationary Gaussian process regression (DL-NSGPR).
Our computational experiments show that the DL-NSGPR predictions are highly accurate with root mean square error 1.7 to 6.2 times smaller than those of competing RUL models.
arXiv Detail & Related papers (2021-09-23T18:19:58Z)
- Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC).
We first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
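To make the model-selection step concrete: treat each anomaly-detection DNN (one per HEC layer, ordered by complexity) as an arm of a contextual bandit and choose an arm from the context of the incoming data. The sketch below uses LinUCB as a simple stand-in for the paper's reinforcement-learning policy network; the context features and reward definition are assumptions.

```python
import numpy as np

class LinUCB:
    """LinUCB contextual bandit: one linear reward model per arm
    (here, per anomaly-detection DNN / HEC layer)."""
    def __init__(self, n_arms, n_features, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(n_features) for _ in range(n_arms)]    # X^T X + I
        self.b = [np.zeros(n_features) for _ in range(n_arms)]  # X^T r

    def select(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))          # which DNN / HEC layer to use

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Usage sketch: the reward would trade off detection accuracy against the
# cost of escalating to a larger model at a higher HEC layer (assumed form).
bandit = LinUCB(n_arms=3, n_features=5)
x = np.random.default_rng(0).standard_normal(5)   # context of incoming data
arm = bandit.select(x)
bandit.update(arm, x, reward=1.0)                 # observed feedback
```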
arXiv Detail & Related papers (2021-08-09T08:45:47Z)
- Uncertainty-aware Remaining Useful Life predictor [57.74855412811814]
Remaining Useful Life (RUL) estimation is the problem of inferring how long a certain industrial asset can be expected to operate.
In this work, we consider Deep Gaussian Processes (DGPs) as possible solutions to the aforementioned limitations.
The performance of the algorithms is evaluated on the N-CMAPSS dataset from NASA for aircraft engines.
arXiv Detail & Related papers (2021-04-08T08:50:44Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- Variance Reduction for Deep Q-Learning using Stochastic Recursive Gradient [51.880464915253924]
Deep Q-learning algorithms often suffer from poor gradient estimations with an excessive variance.
This paper introduces a framework for updating the gradient estimates in deep Q-learning, resulting in a novel algorithm called SRG-DQN.
arXiv Detail & Related papers (2020-07-25T00:54:20Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.