Unknown Health States Recognition With Collective Decision Based Deep
Learning Networks In Predictive Maintenance Applications
- URL: http://arxiv.org/abs/2310.17670v1
- Date: Wed, 25 Oct 2023 08:24:48 GMT
- Title: Unknown Health States Recognition With Collective Decision Based Deep
Learning Networks In Predictive Maintenance Applications
- Authors: Chuyue Lou and M. Amine Atoui
- Abstract summary: This paper proposes a collective decision framework for different CNNs.
It is based on a One-vs-Rest network (OVRN) to simultaneously achieve classification of known and unknown health states.
The OVRN learns state-specific discriminative features and, when incorporated into different CNNs, enhances their ability to reject new abnormal samples.
- Score: 1.0515439489916734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: At present, decision-making solutions based on deep learning (DL)
models have received extensive attention in predictive maintenance (PM)
applications, aided by the rapid improvement of computing power. Relying on
shared weights and spatial pooling, Convolutional Neural Networks (CNNs) can
learn effective representations of health states from industrial data. Many
CNN-based schemes, such as advanced CNNs that introduce residual learning and
multi-scale learning, have shown good performance in health state recognition
tasks under the assumption that all classes are known. However, these schemes
cannot handle new abnormal samples belonging to state classes absent from the
training set. In this paper, a collective decision framework for different
CNNs is proposed. It is based on a One-vs-Rest network (OVRN) that
simultaneously classifies known health states and rejects unknown ones. The
OVRN learns state-specific discriminative features and, when incorporated into
different CNNs, enhances their ability to reject new abnormal samples.
According to validation results on the public Tennessee Eastman Process (TEP)
dataset, the proposed CNN-based decision schemes incorporating the OVRN show
outstanding recognition ability for samples of unknown health states while
maintaining satisfactory accuracy on known states. The results show that the
new DL framework outperforms conventional CNNs, and the variant based on
residual and multi-scale learning achieves the best overall performance.
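The paper itself ships no code; below is a minimal sketch, in PyTorch, of the mechanism the abstract describes: a One-vs-Rest head of per-class sigmoid detectors on top of CNN features, plus a collective decision rule that rejects a sample as an unknown health state when no known class claims it. The head layout, the fixed threshold, and the score-averaging fusion are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class OVRHead(nn.Module):
    """One sigmoid unit per known class; each unit is a one-vs-rest detector."""
    def __init__(self, feat_dim: int, num_known: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_known)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Independent per-class probabilities in [0, 1]; in training each unit
        # would get a binary cross-entropy loss on its own class-vs-rest
        # labels, which is what yields state-specific decision boundaries.
        return torch.sigmoid(self.fc(feats))

def collective_decision(score_list, threshold: float = 0.5) -> torch.Tensor:
    """Fuse OVR scores from several CNN+OVRN models by averaging.

    Returns class indices in [0, num_known), or -1 when no known class
    claims the sample, i.e. an unknown health state.
    """
    scores = torch.stack(score_list).mean(dim=0)  # (batch, num_known)
    max_score, pred = scores.max(dim=1)
    pred = pred.clone()
    pred[max_score < threshold] = -1
    return pred
```

In this reading, each backbone (plain, residual, or multi-scale CNN) feeds its own OVRHead, and the class or rejection decision is taken on the fused scores, which is one plausible interpretation of the "collective decision" above.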
Related papers
- Advancing Out-of-Distribution Detection via Local Neuroplasticity [60.53625435889467] (2025-02-20)
This paper presents a novel OOD detection method that leverages the unique local neuroplasticity property of Kolmogorov-Arnold Networks (KANs).
Our method compares the activation patterns of a trained KAN against its untrained counterpart to detect OOD samples.
We validate our approach on benchmarks from image and medical domains, demonstrating superior performance and robustness compared to state-of-the-art techniques.
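As a hedged sketch only (the paper's algorithm is more specific than this summary), the comparison could look like scoring each input by how far training moved the network's activations on it; `get_activations` and the distance measure here are assumptions:

```python
import torch

def kan_ood_score(trained, untrained, x, get_activations):
    """Distance between the activation patterns of a trained KAN and a
    frozen untrained copy on input x (both assumed to be torch modules)."""
    with torch.no_grad():
        a_trained = get_activations(trained, x)
        a_untrained = get_activations(untrained, x)
    # Training reshapes activations only where training data lies (local
    # neuroplasticity), so a small distance suggests x is out-of-distribution.
    return torch.norm(a_trained - a_untrained).item()
```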
- BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation [20.34272550256856] (2024-07-12)
Spiking neural networks (SNNs) mimic biological neural systems, conveying information via discrete spikes.
Our work achieves state-of-the-art performance for training SNNs on both static and neuromorphic datasets.
- Regularizing CNNs using Confusion Penalty Based Label Smoothing for Histopathology Images [7.659984194016969] (2024-03-16)
Modern CNNs can be overconfident, making them difficult to deploy in real-world scenarios.
This paper introduces a novel label smoothing (LS) technique based on a confusion penalty, which treats the model's confusion for each class with more importance than the others.
We have performed extensive experiments with well-known CNN architectures with this technique on publicly available Colorectal Histology datasets.
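The paper defines the penalty precisely; as an illustrative guess at the mechanism, confusion-based smoothing might distribute the smoothing mass according to a confusion matrix instead of uniformly, so classes the model actually confuses absorb more of it. The function below is an assumption, not the paper's formulation:

```python
import torch
import torch.nn.functional as F

def confusion_smoothed_targets(labels, confusion, eps: float = 0.1):
    """labels: (batch,) int64; confusion: (C, C) counts from a held-out pass."""
    # Row-normalize the confusion matrix into per-class smoothing distributions.
    conf = confusion / confusion.sum(dim=1, keepdim=True).clamp(min=1)
    onehot = F.one_hot(labels, confusion.size(0)).float()
    # Uniform LS would spread eps / C everywhere; here the mass follows confusion.
    return (1 - eps) * onehot + eps * conf[labels]
```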
- How neural networks learn to classify chaotic time series [77.34726150561087] (2023-06-04)
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
- Can pruning improve certified robustness of neural networks? [106.03070538582222] (2022-06-15)
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
- Collective Decision of One-vs-Rest Networks for Open Set Recognition [0.0] (2021-03-18)
We propose a simple open set recognition (OSR) method based on the intuition that OSR performance can be maximized by setting strict and sophisticated decision boundaries.
The proposed method performed significantly better than the state-of-the-art methods by effectively reducing overgeneralization.
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622] (2021-03-14)
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707] (2021-02-17)
We present a novel guided learning paradigm that distills binary networks from real-valued networks through the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
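A minimal sketch, not the authors' training code, of distilling on the final prediction distribution; the temperature-softened KL form is a standard distillation assumption:

```python
import torch
import torch.nn.functional as F

def distribution_distill_loss(binary_logits, real_logits, T: float = 1.0):
    """KL divergence pushing a binary (1-bit) student's prediction
    distribution toward a real-valued teacher's."""
    teacher = F.softmax(real_logits.detach() / T, dim=1)  # teacher is frozen
    student = F.log_softmax(binary_logits / T, dim=1)
    return F.kl_div(student, teacher, reduction="batchmean") * (T * T)
```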
- Multi-Sample Online Learning for Probabilistic Spiking Neural Networks [43.8805663900608] (2020-07-23)
Spiking Neural Networks (SNNs) capture some of the efficiency of biological brains for inference and learning.
This paper introduces an online learning rule based on generalized expectation-maximization (GEM).
Experimental results on structured output memorization and classification on a standard neuromorphic data set demonstrate significant improvements in terms of log-likelihood, accuracy, and calibration.
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522] (2020-07-02)
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
- One Versus all for deep Neural Network Incertitude (OVNNI) quantification [12.734278426543332] (2020-06-01)
We propose a new technique to easily quantify the epistemic uncertainty of data.
This method consists of mixing the predictions of an ensemble of DNNs trained to classify One class vs All the other classes (OVA) with the predictions of a standard DNN trained to perform All vs All (AVA) classification.
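A small sketch of that mixing under one plausible reading: scale the AVA softmax by per-class OVA confidences, so samples that no OVA model claims get uniformly low scores, signalling high epistemic uncertainty. Names and tensor shapes are assumptions:

```python
import torch

def ovnni_scores(ava_logits, ova_logits_list):
    """ava_logits: (batch, C) from the all-vs-all DNN.
    ova_logits_list: C tensors of shape (batch,), one per one-vs-all DNN."""
    ava = torch.softmax(ava_logits, dim=1)                    # (batch, C)
    ova = torch.sigmoid(torch.stack(ova_logits_list, dim=1))  # (batch, C)
    return ava * ova  # low everywhere => epistemically uncertain sample
```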
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774] (2020-04-29)
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
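That per-feature decomposition is compact enough to sketch. The hidden sizes below are illustrative, and the specialized exp-centered (ExU) hidden units the paper introduces are omitted:

```python
import torch
import torch.nn as nn

class NAM(nn.Module):
    """Sum of independent per-feature subnetworks plus a bias."""
    def __init__(self, num_features: int, hidden: int = 32):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features). Each subnetwork sees only its own column,
        # so each learned shape function can be plotted and read on its own.
        contribs = [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)]
        return torch.cat(contribs, dim=1).sum(dim=1) + self.bias
```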
This list is automatically generated from the titles and abstracts of the papers on this site.