A Hybrid Deep Learning Model-based Remaining Useful Life Estimation for
Reed Relay with Degradation Pattern Clustering
- URL: http://arxiv.org/abs/2209.06429v1
- Date: Wed, 14 Sep 2022 05:45:46 GMT
- Title: A Hybrid Deep Learning Model-based Remaining Useful Life Estimation for
Reed Relay with Degradation Pattern Clustering
- Authors: Chinthaka Gamanayake, Yan Qin, Chau Yuen, Lahiru Jayasinghe,
Dominique-Ea Tan and Jenny Low
- Abstract summary: Reed relay serves as the fundamental component of functional testing, which closely relates to the successful quality inspection of electronics.
To provide accurate remaining useful life (RUL) estimation for reed relay, a hybrid deep learning network with degradation pattern clustering is proposed.
- Score: 12.631122036403864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reed relay serves as the fundamental component of functional testing, which
closely relates to the successful quality inspection of electronics. To provide
accurate remaining useful life (RUL) estimation for reed relay, a hybrid deep
learning network with degradation pattern clustering is proposed based on the
following three considerations. First, multiple degradation behaviors are
observed for reed relay, and hence a dynamic time warping-based $K$-means
clustering is offered to distinguish degradation patterns from each other.
Second, although proper selections of features are of great significance, few
studies are available to guide the selection. The proposed method recommends
operational rules for easy implementation purposes. Third, a neural network for
remaining useful life estimation (RULNet) is proposed to address the weakness
of the convolutional neural network (CNN) in capturing temporal information of
sequential data, which incorporates temporal correlation ability after
high-level feature representation of convolutional operation. In this way,
three variants of RULNet are constructed with health indicators, features with
self-organizing map, or features with curve fitting. Ultimately, the proposed
hybrid model is compared with the typical baseline models, including CNN and
long short-term memory network (LSTM), through a practical reed relay dataset
with two distinct degradation modes. The results from both degradation cases
demonstrate that the proposed method outperforms CNN and LSTM regarding the
index root mean squared error.
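The degradation-pattern clustering step in the abstract pairs dynamic time warping with $K$-means. The sketch below is a minimal illustration of that idea, not the paper's implementation: it uses a plain DTW distance and medoid-style centroids (so sequences may differ in length), with deterministic initialization and toy degradation curves as assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_kmeans(series, k, n_iter=10):
    """K-means-style clustering under the DTW distance.
    Centroids are kept as member medoids; init uses the first k sequences
    for reproducibility (a simplification, not the paper's scheme)."""
    centroids = [series[i] for i in range(k)]
    labels = np.zeros(len(series), dtype=int)
    for _ in range(n_iter):
        # assignment step: nearest centroid under DTW
        for idx, s in enumerate(series):
            labels[idx] = int(np.argmin([dtw_distance(s, c) for c in centroids]))
        # update step: medoid minimizing total intra-cluster DTW distance
        for c in range(k):
            members = [s for s, l in zip(series, labels) if l == c]
            if not members:
                continue
            costs = [sum(dtw_distance(s, t) for t in members) for s in members]
            centroids[c] = members[int(np.argmin(costs))]
    return labels
```

Because DTW aligns sequences before comparing them, trajectories with the same degradation shape but different speeds fall into the same cluster, which is what distinguishes this from ordinary Euclidean $K$-means.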
Related papers
- Time Elastic Neural Networks [2.1756081703276]
We introduce and detail an atypical neural network architecture, called the time elastic neural network (teNN).
The novelty compared to classical neural network architecture is that it explicitly incorporates time warping ability.
We demonstrate that, during the training process, the teNN succeeds in reducing the number of neurons required within each cell.
arXiv Detail & Related papers (2024-05-27T09:01:30Z)
- DiTMoS: Delving into Diverse Tiny-Model Selection on Microcontrollers [34.282971510732736]
We introduce DiTMoS, a novel DNN training and inference framework with a selector-classifiers architecture.
A composition of weak models can exhibit high diversity and the union of them can significantly boost the accuracy upper bound.
We deploy DiTMoS on the Nucleo STM32F767ZI board and evaluate it on three time-series datasets for human activity recognition, keyword spotting, and emotion recognition.
arXiv Detail & Related papers (2024-03-14T02:11:38Z)
- A Library of Mirrors: Deep Neural Nets in Low Dimensions are Convex Lasso Models with Reflection Features [54.83898311047626]
We consider neural networks with piecewise linear activations ranging from 2 to an arbitrary but finite number of layers.
We first show that two-layer networks with piecewise linear activations are Lasso models using a discrete dictionary of ramp depths.
arXiv Detail & Related papers (2024-03-02T00:33:45Z)
- Iterative self-transfer learning: A general methodology for response time-history prediction based on small dataset [0.0]
An iterative self-transfer learning method for training neural networks on small datasets is proposed in this study.
The results show that the proposed method can improve model performance by nearly an order of magnitude on small datasets.
arXiv Detail & Related papers (2023-06-14T18:48:04Z)
- JANA: Jointly Amortized Neural Approximation of Complex Bayesian Models [0.5872014229110214]
We propose "jointly amortized neural approximation" (JANA) of intractable likelihood functions and posterior densities.
We benchmark the fidelity of JANA on a variety of simulation models against state-of-the-art Bayesian methods.
arXiv Detail & Related papers (2023-02-17T20:17:21Z)
- Case-Base Neural Networks: survival analysis with time-varying, higher-order interactions [0.20482269513546458]
We propose Case-Base Neural Networks (CBNNs) as a new approach that combines the case-base sampling framework with flexible neural network architectures.
CBNNs predict the probability of an event occurring at a given moment to estimate the full hazard function.
Our results highlight the benefit of combining case-base sampling with deep learning to provide a simple and flexible framework for data-driven modeling of single event survival outcomes.
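The summary above rests on the standard discrete-time survival relation: per-interval hazard predictions combine into a survival curve via $S(t) = \prod_{u \le t}(1 - h(u))$. The sketch below shows that textbook conversion in plain NumPy; the hazard values are made up for illustration and this is not the CBNN model itself.

```python
import numpy as np

def survival_from_hazard(hazard):
    """Convert discrete-time hazards h(t) = P(event at t | survived to t)
    into the survival curve S(t) = prod_{u<=t} (1 - h(u))."""
    hazard = np.asarray(hazard, dtype=float)
    return np.cumprod(1.0 - hazard)

# toy per-interval hazards, e.g. what a hazard model might output
surv = survival_from_hazard([0.1, 0.2, 0.3])  # -> [0.9, 0.72, 0.504]
```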
arXiv Detail & Related papers (2023-01-16T17:44:16Z)
- Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Learning representations with end-to-end models for improved remaining useful life prognostics [64.80885001058572]
The Remaining Useful Life (RUL) of equipment is defined as the duration between the current time and its failure.
We propose an end-to-end deep learning model based on multi-layer perceptron and long short-term memory layers (LSTM) to predict the RUL.
We will discuss how the proposed end-to-end model is able to achieve such good results and compare it to other deep learning and state-of-the-art methods.
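The RUL definition in the summary above leads directly to how supervised training targets are usually built from run-to-failure data: at cycle t of a trajectory of length n, the label is the time left until failure. A minimal sketch follows; the optional piecewise-linear cap is a common convention in RUL work (e.g. C-MAPSS-style setups), not necessarily this paper's choice.

```python
import numpy as np

def rul_labels(n_cycles, max_rul=None):
    """RUL target for each cycle of a run-to-failure trajectory:
    time remaining until failure, optionally capped at max_rul so early-life
    targets plateau (piecewise-linear degradation assumption)."""
    rul = np.arange(n_cycles - 1, -1, -1, dtype=float)  # n-1, n-2, ..., 0
    if max_rul is not None:
        rul = np.minimum(rul, max_rul)
    return rul

rul_labels(5)             # -> [4., 3., 2., 1., 0.]
rul_labels(5, max_rul=2)  # -> [2., 2., 2., 1., 0.]
```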
arXiv Detail & Related papers (2021-04-11T16:45:18Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Understanding Self-supervised Learning with Dual Deep Networks [74.92916579635336]
We propose a novel framework to understand contrastive self-supervised learning (SSL) methods that employ dual pairs of deep ReLU networks.
We prove that in each SGD update of SimCLR with various loss functions, the weights at each layer are updated by a covariance operator.
To further study what role the covariance operator plays and which features are learned in such a process, we model data generation and augmentation processes through a hierarchical latent tree model (HLTM).
arXiv Detail & Related papers (2020-10-01T17:51:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.