Explainability: Relevance based Dynamic Deep Learning Algorithm for
Fault Detection and Diagnosis in Chemical Processes
- URL: http://arxiv.org/abs/2103.12222v1
- Date: Mon, 22 Mar 2021 23:10:05 GMT
- Title: Explainability: Relevance based Dynamic Deep Learning Algorithm for
Fault Detection and Diagnosis in Chemical Processes
- Authors: Piyush Agarwal, Melih Tamer and Hector Budman
- Abstract summary: Two important applications of Statistical Process Control (SPC) in industrial settings are fault detection and diagnosis (FDD).
In this work, a deep learning (DL) based methodology is proposed for FDD.
We investigate the application of an explainability concept to enhance the FDD accuracy of a deep neural network model trained on a data set with a relatively small number of samples.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The focus of this work is on Statistical Process Control (SPC) of a
manufacturing process based on available measurements. Two important
applications of SPC in industrial settings are fault detection and diagnosis
(FDD). In this work a deep learning (DL) based methodology is proposed for FDD.
We investigate the application of an explainability concept to enhance the FDD
accuracy of a deep neural network model trained with a data set of relatively
small number of samples. The explainability is quantified by a novel relevance
measure of input variables that is calculated from a Layerwise Relevance
Propagation (LRP) algorithm. It is shown that the relevances can be used to
discard redundant input feature vectors/ variables iteratively thus resulting
in reduced over-fitting of noisy data, increasing distinguishability between
output classes and superior FDD test accuracy. The efficacy of the proposed
method is demonstrated on the benchmark Tennessee Eastman Process.
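To make the relevance-based pruning idea concrete, the sketch below implements the generic epsilon rule of Layerwise Relevance Propagation for a small fully connected network and flags the least relevant input variable as a candidate to discard. It is a minimal illustration, not the authors' implementation: the layer sizes, the epsilon stabiliser, the synthetic data and the single-sample relevance estimate are all assumptions made for this example.

```python
# Minimal sketch: epsilon-rule LRP for a small fully connected network, followed by
# the iterative-pruning idea from the abstract (drop the least relevant input).
# Layer sizes, epsilon and the toy data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights, biases):
    """Return the activations of every layer, input included (ReLU hidden, linear output)."""
    activations = [x]
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = W @ x + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)                    # ReLU on hidden layers only
        activations.append(x)
    return activations

def lrp_epsilon(activations, weights, eps=1e-2):
    """Propagate the score of the predicted class back to the input variables."""
    out = activations[-1]
    relevance = np.zeros_like(out)
    relevance[np.argmax(out)] = out[np.argmax(out)]   # start from the winning class
    for layer in reversed(range(len(weights))):
        a, W = activations[layer], weights[layer]
        z = W @ a
        z = z + eps * np.where(z >= 0, 1.0, -1.0)     # stabilised denominators
        s = relevance / z                             # relevance per upper-layer neuron
        relevance = a * (W.T @ s)                     # redistribute to the layer below
    return relevance                                  # one score per input variable

# Toy model: 8 process measurements, two hidden layers, 3 fault classes.
sizes = [8, 16, 16, 3]
weights = [rng.normal(scale=0.3, size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
x = rng.normal(size=sizes[0])                         # one synthetic sample

# In practice the relevances would be averaged over a validation set and the network
# retrained after each removal; a single sample is used here only for brevity.
input_relevance = lrp_epsilon(forward(x, weights, biases), weights)
print("input relevances:", np.round(input_relevance, 4))
print("candidate variable to discard:", int(np.argmin(np.abs(input_relevance))))
```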
Related papers
- Fine-tuning -- a Transfer Learning approach [0.22344294014777952]
Research use of Electronic Health Records (EHRs) is often hampered by the abundance of missing data in this valuable resource.
Existing deep imputation methods rely on end-to-end pipelines that incorporate both imputation and downstream analyses.
This paper explores the development of a modular, deep learning-based imputation and classification pipeline.
arXiv Detail & Related papers (2024-11-06T14:18:23Z) - Embedding Trajectory for Out-of-Distribution Detection in Mathematical Reasoning [50.84938730450622]
We propose a trajectory-based method, TV score, which uses trajectory volatility for out-of-distribution (OOD) detection in mathematical reasoning (a minimal sketch of this idea appears after the related-papers list below).
Our method outperforms all traditional algorithms on GLMs under mathematical reasoning scenarios.
Our method can be extended to more applications with high-density features in output spaces, such as multiple-choice questions.
arXiv Detail & Related papers (2024-05-22T22:22:25Z) - Twin Transformer using Gated Dynamic Learnable Attention mechanism for Fault Detection and Diagnosis in the Tennessee Eastman Process [0.46040036610482665]
Fault detection and diagnosis (FDD) is a crucial task for ensuring the safety and efficiency of industrial processes.
We propose a novel FDD methodology for the Tennessee Eastman Process (TEP), a widely used benchmark for chemical process control.
A novel attention mechanism, Gated Dynamic Learnable Attention (GDLAttention), is introduced that integrates a gating mechanism with dynamic learning capabilities.
arXiv Detail & Related papers (2024-03-16T07:40:23Z) - Scalable and reliable deep transfer learning for intelligent fault
detection via multi-scale neural processes embedded with knowledge [7.730457774728478]
This paper proposes a novel deep transfer learning (DTL) method known as Neural Processes-based deep transfer learning with graph convolution network (GTNP).
The validation of the proposed method is conducted across 3 IFD tasks, consistently showing the superior detection performance of GTNP compared to the other DTL-based methods.
arXiv Detail & Related papers (2024-02-20T05:39:32Z) - Querying Easily Flip-flopped Samples for Deep Active Learning [63.62397322172216]
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
One effective selection strategy is to base it on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is.
This paper proposes the least disagree metric (LDM), defined as the smallest probability of disagreement of the predicted label.
arXiv Detail & Related papers (2024-01-18T08:12:23Z) - Uncertainty Estimation by Fisher Information-based Evidential Deep
Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of the evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network focus more on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z) - Validation Diagnostics for SBI algorithms based on Normalizing Flows [55.41644538483948]
This work proposes easy-to-interpret validation diagnostics for multi-dimensional conditional (posterior) density estimators based on normalizing flows (NF).
It also offers theoretical guarantees based on results of local consistency.
This work should help the design of better specified models or drive the development of novel SBI-algorithms.
arXiv Detail & Related papers (2022-11-17T15:48:06Z) - A New Knowledge Distillation Network for Incremental Few-Shot Surface
Defect Detection [20.712532953953808]
This paper proposes a new knowledge distillation network, called Dual Knowledge Align Network (DKAN).
The proposed DKAN method follows a pretraining-finetuning transfer learning paradigm and a knowledge distillation framework is designed for fine-tuning.
Experiments have been conducted on the incremental Few-shot NEU-DET dataset and results show that DKAN outperforms other methods on various few-shot scenes.
arXiv Detail & Related papers (2022-09-01T15:08:44Z) - An Accelerated Doubly Stochastic Gradient Method with Faster Explicit
Model Identification [97.28167655721766]
We propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity-regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
arXiv Detail & Related papers (2022-08-11T22:27:22Z) - Probabilistic Bearing Fault Diagnosis Using Gaussian Process with
Tailored Feature Extraction [10.064000794573756]
Rolling bearings are subject to various faults due to their long-time operation in harsh environments.
Current deep learning methods perform the bearing fault diagnosis in the form of deterministic classification.
We develop a probabilistic fault diagnosis framework that can account for the uncertainty effect in prediction (see the Gaussian Process sketch after the related-papers list below).
arXiv Detail & Related papers (2021-09-19T18:34:29Z) - Efficient training of lightweight neural networks using Online
Self-Acquired Knowledge Distillation [51.66271681532262]
Online Self-Acquired Knowledge Distillation (OSAKD) is proposed, aiming to improve the performance of any deep neural model in an online manner.
We utilize the k-NN non-parametric density estimation technique to estimate the unknown probability distributions of the data samples in the output feature space (a minimal sketch of this estimator follows the list below).
arXiv Detail & Related papers (2021-08-26T14:01:04Z)
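The OOD-detection entry above ("Embedding Trajectory for Out-of-Distribution Detection in Mathematical Reasoning") scores inputs by the volatility of their embedding trajectory. The sketch below is one plausible, heavily simplified reading of that idea: volatility is taken here as the standard deviation of the norms of consecutive layer-to-layer embedding shifts, and the threshold and toy data are assumptions of this example, not the authors' TV score.

```python
# Sketch of trajectory-volatility-based OOD detection; definitions and threshold
# are assumptions made for illustration, not the paper's exact formulation.
import numpy as np

def trajectory_volatility(layer_embeddings: np.ndarray) -> float:
    """layer_embeddings: shape (num_layers, hidden_dim) for one input.
    Volatility is taken as the std of the norms of consecutive embedding shifts."""
    steps = np.diff(layer_embeddings, axis=0)           # shift between adjacent layers
    step_norms = np.linalg.norm(steps, axis=1)          # magnitude of each shift
    return float(np.std(step_norms))

def is_out_of_distribution(layer_embeddings: np.ndarray, threshold: float) -> bool:
    """Flag an input as OOD when its embedding trajectory is unusually volatile."""
    return trajectory_volatility(layer_embeddings) > threshold

# Toy usage: 12 layers, 64-dim hidden states, threshold picked for this toy data.
rng = np.random.default_rng(1)
in_dist = rng.normal(scale=0.1, size=(12, 64)).cumsum(axis=0)   # smoother trajectory
ood = rng.normal(scale=0.5, size=(12, 64)).cumsum(axis=0)       # more erratic trajectory
print(is_out_of_distribution(in_dist, threshold=0.2))           # typically False here
print(is_out_of_distribution(ood, threshold=0.2))               # typically True here
```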
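The probabilistic bearing-diagnosis entry above contrasts deterministic classification with a framework that accounts for predictive uncertainty. The sketch below shows the general idea with an off-the-shelf Gaussian Process classifier; the synthetic two-dimensional features, the RBF kernel and the three fault classes are assumptions, and the paper's tailored feature extraction is not reproduced.

```python
# Sketch of probabilistic (rather than deterministic) fault classification with a
# Gaussian Process. Features, kernel and class labels are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

# Toy data: 2-D features for three bearing conditions (healthy, inner-race, outer-race).
centers = np.array([[0.0, 0.0], [2.0, 2.0], [-2.0, 2.0]])
X = np.vstack([c + 0.5 * rng.normal(size=(40, 2)) for c in centers])
y = np.repeat([0, 1, 2], 40)

gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
gp.fit(X, y)

# Unlike a deterministic classifier, the GP returns a full class-probability vector,
# so low-confidence (uncertain) predictions can be flagged rather than silently trusted.
probs = gp.predict_proba(np.array([[1.0, 1.0]]))
print("class probabilities:", np.round(probs, 3))
```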
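The OSAKD entry above mentions k-NN non-parametric density estimation in the output feature space. The sketch below implements the textbook k-NN density estimator (k divided by the number of samples times the volume of the ball reaching the k-th nearest neighbour); the feature dimension, the choice of k and the toy data are assumptions, and the paper's exact formulation may differ.

```python
# Sketch of the textbook k-NN non-parametric density estimator in a feature space.
# Dimension, k and the toy data are assumptions made for illustration.
import numpy as np
from math import gamma, pi

def knn_density(query: np.ndarray, features: np.ndarray, k: int = 5) -> float:
    """Estimate p(query) from n reference feature vectors using the k-NN rule:
    p_hat = k / (n * volume of the d-ball reaching the k-th nearest neighbour)."""
    n, d = features.shape
    dists = np.linalg.norm(features - query, axis=1)
    r_k = np.partition(dists, k - 1)[k - 1]              # distance to k-th neighbour
    ball_volume = (pi ** (d / 2) / gamma(d / 2 + 1)) * r_k ** d
    return k / (n * ball_volume)

# Toy usage: 500 output-space feature vectors in 8 dimensions.
rng = np.random.default_rng(2)
feats = rng.normal(size=(500, 8))
print(knn_density(rng.normal(size=8), feats, k=5))       # density near the bulk
print(knn_density(np.full(8, 5.0), feats, k=5))          # much smaller in the tail
```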