Uncertainty-aware deep learning for digital twin-driven monitoring:
Application to fault detection in power lines
- URL: http://arxiv.org/abs/2303.10954v1
- Date: Mon, 20 Mar 2023 09:27:58 GMT
- Title: Uncertainty-aware deep learning for digital twin-driven monitoring:
Application to fault detection in power lines
- Authors: Laya Das, Blazhe Gjorgiev, Giovanni Sansavini
- Abstract summary: Deep neural networks (DNNs) are often coupled with physics-based models or data-driven surrogate models to perform fault detection and health monitoring of systems in the low data regime.
These models can exhibit parametric uncertainty that propagates to the generated data.
In this article, we quantify the impact of both these sources of uncertainty on the performance of the DNN.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are often coupled with physics-based models or
data-driven surrogate models to perform fault detection and health monitoring
of systems in the low data regime. These models serve as digital twins to
generate large quantities of data to train DNNs which would otherwise be
difficult to obtain from the real-life system. However, such models can exhibit
parametric uncertainty that propagates to the generated data. In addition, DNNs
exhibit uncertainty in the parameters learnt during training. In such a
scenario, the performance of the DNN model will be influenced by the
uncertainty in the physics-based model as well as the parameters of the DNN. In
this article, we quantify the impact of both these sources of uncertainty on
the performance of the DNN. We perform explicit propagation of uncertainty in
input data through all layers of the DNN, as well as implicit prediction of
output uncertainty to capture the former. Furthermore, we adopt Monte Carlo
dropout to capture uncertainty in DNN parameters. We demonstrate the approach
for fault detection of power lines with a physics-based model, two types of
input data and three different neural network architectures. We compare the
performance of such uncertainty-aware probabilistic models with their
deterministic counterparts. The results show that the probabilistic models
provide important information regarding the confidence of predictions, while
also delivering an improvement in performance over deterministic models.
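The Monte Carlo dropout technique mentioned in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the two-layer network, its weights, and the input are hypothetical stand-ins. The key idea shown is that dropout is kept active at inference time, so repeated stochastic forward passes yield a distribution over predictions whose spread estimates the uncertainty in the DNN parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with fixed weights standing in for a trained model.
W1 = rng.normal(size=(8, 1)); b1 = np.zeros(8)
W2 = rng.normal(size=(1, 8)); b2 = np.zeros(1)

def forward(x, drop_p=0.5):
    """One stochastic forward pass; dropout stays active at inference."""
    h = np.maximum(0.0, W1 @ x + b1)         # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p      # Bernoulli dropout mask
    h = h * mask / (1.0 - drop_p)            # inverted-dropout scaling
    return (W2 @ h + b2)[0]

# Monte Carlo dropout: average T stochastic passes instead of one pass.
x = np.array([0.3])
T = 1000
samples = np.array([forward(x) for _ in range(T)])
mean, std = samples.mean(), samples.std()
print(f"prediction {mean:.3f} +/- {std:.3f}")
```

The standard deviation over the T passes plays the role of the parameter-uncertainty estimate; in the paper this is combined with explicit propagation of input-data uncertainty through the layers, which the sketch omits.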
Related papers
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Correcting model misspecification in physics-informed neural networks (PINNs) [2.07180164747172]
We present a general approach to correct the misspecified physical models in PINNs for discovering governing equations.
We employ other deep neural networks (DNNs) to model the discrepancy between the imperfect models and the observational data.
We envision that the proposed approach will extend the applications of PINNs for discovering governing equations in problems where the physico-chemical or biological processes are not well understood.
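The discrepancy-modeling idea in this entry can be sketched with a toy example. The "physics model", the true process, and the use of a polynomial least-squares fit in place of the discrepancy DNN are all illustrative assumptions, not the paper's method: the point is only that a learned correction term absorbs the mismatch between an imperfect model and observations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Misspecified physics model vs. the "true" process (both hypothetical).
x = np.linspace(0.0, 1.0, 50)
physics = lambda x: 2.0 * x                                  # imperfect model
y_obs = 2.0 * x + 0.5 * x**2 + rng.normal(scale=0.02, size=x.size)

# Fit the discrepancy y_obs - physics(x); a quadratic least-squares fit
# stands in here for the discrepancy DNN used in the paper.
resid = y_obs - physics(x)
coeffs = np.polyfit(x, resid, deg=2)
corrected = physics(x) + np.polyval(coeffs, x)

rmse_before = np.sqrt(np.mean((physics(x) - y_obs) ** 2))
rmse_after = np.sqrt(np.mean((corrected - y_obs) ** 2))
print(f"RMSE before {rmse_before:.3f}, after {rmse_after:.3f}")
```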
arXiv Detail & Related papers (2023-10-16T19:25:52Z)
- Evaluation of machine learning architectures on the quantification of epistemic and aleatoric uncertainties in complex dynamical systems [0.0]
Uncertainty Quantification (UQ) is a self-assessed estimate of the model error.
We examine several machine learning techniques, including both Gaussian processes and a family of UQ-augmented neural networks.
We evaluate UQ accuracy (distinct from model accuracy) using two metrics: the distribution of normalized residuals on validation data, and the distribution of estimated uncertainties.
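The normalized-residual metric mentioned here has a simple form: if a model predicts both a value and an uncertainty, a well-calibrated model's residuals divided by the predicted standard deviations should follow a standard normal distribution. The sketch below uses synthetic data (not the paper's benchmarks) to show the check.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground truth, predictions, and predicted std from a UQ model.
y_true = rng.normal(size=500)
sigma = np.full(500, 0.5)                           # model's claimed uncertainty
y_pred = y_true + rng.normal(scale=0.5, size=500)   # errors match sigma here

# Normalized residuals: well-calibrated UQ gives z ~ N(0, 1).
z = (y_pred - y_true) / sigma
print(f"mean(z) = {z.mean():.2f}, std(z) = {z.std():.2f}")
```

A std(z) well above 1 indicates overconfidence (uncertainties too small); well below 1 indicates underconfidence.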
arXiv Detail & Related papers (2023-06-27T02:35:25Z)
- PINN Training using Biobjective Optimization: The Trade-off between Data Loss and Residual Loss [0.0]
Physics-informed neural networks (PINNs) have proven to be an efficient tool for representing problems for which measured data are available.
In this paper, we suggest a multiobjective perspective on the training of PINNs by treating the data loss and the residual loss as two individual objective functions.
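The data-loss/residual-loss trade-off can be illustrated on a deliberately tiny problem. This one-parameter "PINN" for the ODE u' = 2 is a hypothetical stand-in, not the paper's setup; scalarizing the two objectives with a weight w and sweeping w traces points on the trade-off curve between fitting measurements and satisfying the physics.

```python
import numpy as np

# Toy one-parameter model u_theta(x) = theta * x for the ODE u'(x) = 2.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + np.random.default_rng(2).normal(scale=0.1, size=x.size)  # noisy data

thetas = np.linspace(0.0, 4.0, 4001)
# Data loss: misfit to measurements, for every candidate theta at once.
losses_d = np.mean((thetas[:, None] * x[None, :] - y[None, :]) ** 2, axis=1)
# Residual loss: ODE residual d/dx(theta * x) - 2 = theta - 2.
losses_r = (thetas - 2.0) ** 2

# Scalarize the two objectives and minimize over the grid for several weights.
best_thetas = []
for w in (0.1, 0.5, 0.9):
    best = int(np.argmin(w * losses_d + (1.0 - w) * losses_r))
    best_thetas.append(thetas[best])
    print(f"w = {w}: theta* = {thetas[best]:.3f}")
```

Because the data were generated from the same physics, all weights recover a minimizer near theta = 2; with a misspecified model or biased data, the minimizers would spread out along a Pareto front as w varies.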
arXiv Detail & Related papers (2023-02-03T15:27:50Z)
- A critical look at deep neural network for dynamic system modeling [0.0]
This paper questions the capability of (deep) neural networks for the modeling of dynamic systems using input-output data.
For the identification of linear time-invariant (LTI) dynamic systems, two representative neural network models are compared.
For the LTI system, both LSTM and CFNN fail to deliver consistent models even in noise-free cases.
arXiv Detail & Related papers (2023-01-27T09:03:05Z)
- Fault-Aware Design and Training to Enhance DNNs Reliability with Zero-Overhead [67.87678914831477]
Deep Neural Networks (DNNs) enable a wide range of technological advancements.
Recent findings indicate that transient hardware faults may dramatically corrupt the model's predictions.
In this work, we propose to tackle the reliability issue both at training and model design time.
arXiv Detail & Related papers (2022-05-28T13:09:30Z)
- On the Prediction Instability of Graph Neural Networks [2.3605348648054463]
Instability of trained models can affect the reliability of, and trust in, machine learning systems.
We systematically assess the prediction instability of node classification with state-of-the-art Graph Neural Networks (GNNs).
We find that up to one third of the incorrectly classified nodes differ across algorithm runs.
arXiv Detail & Related papers (2022-05-20T10:32:59Z)
- Enhanced physics-constrained deep neural networks for modeling vanadium redox flow battery [62.997667081978825]
We propose an enhanced version of the physics-constrained deep neural network (PCDNN) approach to provide high-accuracy voltage predictions.
The ePCDNN can accurately capture the voltage response throughout the charge-discharge cycle, including the tail region of the voltage discharge curve.
arXiv Detail & Related papers (2022-03-03T19:56:24Z)
- On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Stochastic Graph Neural Networks [123.39024384275054]
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In these situations, the GNN fails to address its distributed task if the topological randomness is not considered accordingly.
arXiv Detail & Related papers (2020-06-04T08:00:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.