Generalized multiscale feature extraction for remaining useful life
prediction of bearings with generative adversarial networks
- URL: http://arxiv.org/abs/2109.12513v1
- Date: Sun, 26 Sep 2021 07:11:55 GMT
- Title: Generalized multiscale feature extraction for remaining useful life
prediction of bearings with generative adversarial networks
- Authors: Sungho Suh, Paul Lukowicz, Yong Oh Lee
- Abstract summary: A bearing is a key component in industrial machinery, and its failure may lead to unwanted downtime and economic loss.
It is necessary to predict the remaining useful life (RUL) of bearings.
We propose a novel generalized multiscale feature extraction method with generative adversarial networks.
- Score: 4.988898367111902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A bearing is a key component in industrial machinery, and its failure may lead
to unwanted downtime and economic loss. Hence, it is necessary to predict the
remaining useful life (RUL) of bearings. Conventional data-driven approaches to
RUL prediction require expert domain knowledge for manual feature extraction
and may suffer from data distribution discrepancy between training and test
data. In this study, we propose a novel generalized multiscale feature
extraction method with generative adversarial networks. The adversarial
training learns the distribution of training data from different bearings and
is introduced for health stage division and RUL prediction. To capture
sequential features from a one-dimensional vibration signal, we adapt a U-Net
architecture that reconstructs features to process them with multiscale layers
in the generator of the adversarial network. To validate the proposed method,
comprehensive experiments on two rotating machinery datasets have been
conducted to predict the RUL. The experimental results show that the proposed
feature extraction method can effectively predict the RUL and outperforms the
conventional RUL prediction approaches based on deep neural networks. The
implementation code is available at https://github.com/opensuh/GMFE.
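The abstract includes no code, but the kind of generator it describes, a U-Net-style encoder/decoder that processes a one-dimensional vibration signal with multiscale convolutional layers, can be sketched roughly as follows. This is a minimal PyTorch illustration; the module names (MultiscaleBlock, UNet1D), kernel widths, and channel counts are assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Minimal sketch of a multiscale 1-D feature extractor with a U-Net-style skip
# connection. Names and sizes are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


class MultiscaleBlock(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes, concatenated."""

    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes]
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(torch.cat([b(x) for b in self.branches], dim=1))


class UNet1D(nn.Module):
    """Tiny U-Net-style encoder/decoder with one skip connection."""

    def __init__(self):
        super().__init__()
        self.enc = MultiscaleBlock(1, 8)            # -> 24 channels
        self.down = nn.MaxPool1d(2)
        self.mid = MultiscaleBlock(24, 16)          # -> 48 channels
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec = nn.Conv1d(48 + 24, 1, kernel_size=3, padding=1)

    def forward(self, x):
        e = self.enc(x)                             # (B, 24, L)
        m = self.up(self.mid(self.down(e)))         # (B, 48, L)
        return self.dec(torch.cat([m, e], dim=1))   # reconstructed signal


if __name__ == "__main__":
    vib = torch.randn(4, 1, 256)                    # batch of 1-D vibration windows
    print(UNet1D()(vib).shape)                      # torch.Size([4, 1, 256])
```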
Related papers
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
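One loose illustration of how the "reversion toward a constant prediction" observation can support risk-sensitive decisions: abstain whenever a classifier's output sits close to the marginal label distribution, a proxy for the optimal constant solution under cross-entropy. The decide helper and its threshold below are illustrative assumptions, not the paper's procedure.

```python
# Simplified risk-sensitive decision rule: if the classifier's output has
# collapsed toward the constant/marginal solution, abstain instead of acting.
# The abstain threshold is an arbitrary assumption.
import numpy as np


def decide(probs, label_marginal, abstain_threshold=0.05):
    """Return the predicted class, or -1 (abstain) if the output is near the
    marginal/constant solution, which tends to happen on OOD inputs."""
    kl_to_marginal = np.sum(probs * np.log(probs / label_marginal))
    if kl_to_marginal < abstain_threshold:
        return -1
    return int(np.argmax(probs))


marginal = np.array([0.25, 0.25, 0.25, 0.25])
print(decide(np.array([0.9, 0.05, 0.03, 0.02]), marginal))   # confident -> class 0
print(decide(np.array([0.26, 0.25, 0.25, 0.24]), marginal))  # near-marginal -> -1 (abstain)
```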
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Learning from Predictions: Fusing Training and Autoregressive Inference
for Long-Term Spatiotemporal Forecasts [4.068387278512612]
We propose the Scheduled Autoregressive BPTT (BPTT-SA) algorithm for predicting complex systems.
Our results show that BPTT-SA effectively reduces iterative error propagation in Convolutional RNNs and Convolutional Autoencoder RNNs.
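As the name suggests, scheduled autoregressive training gradually exposes the model to its own predictions during training instead of always feeding it ground truth. The generic scheduled-sampling sketch below illustrates that idea with a toy GRU and a linear schedule; the model, schedule, and data are assumptions, not the paper's convolutional RNN setup.

```python
# Generic scheduled-sampling training loop: with probability p (ramped up over
# epochs) the model receives its own previous prediction as the next input.
import torch
import torch.nn as nn

model = nn.GRUCell(input_size=1, hidden_size=16)
readout = nn.Linear(16, 1)
opt = torch.optim.Adam(list(model.parameters()) + list(readout.parameters()), lr=1e-3)
series = torch.sin(torch.linspace(0, 12.0, 200)).unsqueeze(-1)  # toy 1-D sequence

for epoch in range(50):
    p_own = min(1.0, epoch / 40.0)            # probability of feeding back the prediction
    h = torch.zeros(1, 16)
    x = series[0].unsqueeze(0)
    loss = 0.0
    for t in range(1, len(series)):
        h = model(x, h)
        pred = readout(h)
        loss = loss + (pred - series[t].unsqueeze(0)).pow(2).mean()
        use_own = torch.rand(1).item() < p_own
        x = pred.detach() if use_own else series[t].unsqueeze(0)
    opt.zero_grad()
    loss.backward()
    opt.step()
```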
arXiv Detail & Related papers (2023-02-22T02:46:54Z) - Prediction-Powered Inference [68.97619568620709]
Prediction-powered inference is a framework for performing valid statistical inference when an experimental dataset is supplemented with predictions from a machine-learning system.
The framework yields simple algorithms for computing provably valid confidence intervals for quantities such as means, quantiles, and linear and logistic regression coefficients.
Prediction-powered inference could enable researchers to draw valid and more data-efficient conclusions using machine learning.
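For the mean, the prediction-powered point estimate combines model predictions on a large unlabelled set with a bias "rectifier" estimated on a small labelled set; a normal-approximation interval follows from adding the two variance terms. A minimal sketch on synthetic data:

```python
# Prediction-powered estimate of a mean on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
y_lab = rng.normal(5.0, 1.0, size=200)          # labelled outcomes
f_lab = y_lab + rng.normal(0.3, 0.5, size=200)  # model predictions on labelled data (biased)
f_unl = rng.normal(5.3, 1.1, size=20000)        # model predictions on unlabelled data

rectifier = np.mean(f_lab - y_lab)              # estimated prediction bias
theta_pp = np.mean(f_unl) - rectifier           # prediction-powered point estimate

# Normal-approximation confidence interval: the two variance terms add.
se = np.sqrt(np.var(f_unl, ddof=1) / len(f_unl)
             + np.var(f_lab - y_lab, ddof=1) / len(y_lab))
print(theta_pp, theta_pp - 1.96 * se, theta_pp + 1.96 * se)
```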
arXiv Detail & Related papers (2023-01-23T18:59:28Z) - Disentangled Representation Learning for RF Fingerprint Extraction under
Unknown Channel Statistics [77.13542705329328]
We propose a framework of disentangled representation learning (DRL) that first learns to factor the input signals into a device-relevant component and a device-irrelevant component via adversarial learning.
The implicit data augmentation in the proposed framework imposes a regularization on the RFF extractor to avoid the possible overfitting of device-irrelevant channel statistics.
Experiments validate that the proposed approach, referred to as DR-RFF, outperforms conventional methods in terms of generalizability to unknown complicated propagation environments.
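A common way to implement this kind of adversarial factorization is a gradient reversal layer: a device classifier is trained normally on the extracted features, while a nuisance (channel) head receives reversed gradients so the features become uninformative about the channel. The sketch below uses that generic construction; it is an assumption for illustration, not necessarily the DR-RFF architecture.

```python
# Generic adversarial disentanglement sketch with a gradient reversal layer (GRL).
# Sizes, heads, and labels are illustrative assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

extractor = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
device_head = nn.Linear(32, 10)      # device-relevant component (RF fingerprint)
channel_head = nn.Linear(32, 5)      # device-irrelevant (channel) component
opt = torch.optim.Adam([*extractor.parameters(), *device_head.parameters(),
                        *channel_head.parameters()], lr=1e-3)

x = torch.randn(128, 64)             # received-signal features
dev = torch.randint(0, 10, (128,))   # device labels
chan = torch.randint(0, 5, (128,))   # channel labels

for step in range(100):
    z = extractor(x)
    loss_dev = nn.functional.cross_entropy(device_head(z), dev)
    loss_chan = nn.functional.cross_entropy(channel_head(GradReverse.apply(z, 1.0)), chan)
    loss = loss_dev + loss_chan      # reversed gradient pushes z to hide the channel
    opt.zero_grad()
    loss.backward()
    opt.step()
```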
arXiv Detail & Related papers (2022-08-04T15:46:48Z) - Accurate Remaining Useful Life Prediction with Uncertainty
Quantification: a Deep Learning and Nonstationary Gaussian Process Approach [0.0]
Remaining useful life (RUL) refers to the expected remaining lifespan of a component or system.
We devise a highly accurate RUL prediction model with uncertainty quantification, which integrates and leverages the advantages of deep learning and nonstationary Gaussian process regression (DL-NSGPR).
Our computational experiments show that the DL-NSGPR predictions are highly accurate with root mean square error 1.7 to 6.2 times smaller than those of competing RUL models.
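The uncertainty-quantification part can be pictured with plain Gaussian process regression: the predictive mean gives the RUL estimate and the predictive standard deviation gives the uncertainty band. The sketch below uses a stationary RBF kernel on synthetic degradation data as a simplified stand-in for the paper's nonstationary GP combined with deep learning.

```python
# Simplified RUL prediction with uncertainty via GP regression (stationary kernel,
# synthetic degradation data; the paper's DL-NSGPR model is more elaborate).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
t = np.linspace(0, 100, 60).reshape(-1, 1)            # operating time (feature)
rul = 100 - t.ravel() + rng.normal(0, 3, size=60)     # noisy remaining life (target)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=20.0) + WhiteKernel(),
                               normalize_y=True)
gpr.fit(t, rul)

mean, std = gpr.predict(np.array([[80.0]]), return_std=True)
print(f"predicted RUL at t=80: {mean[0]:.1f} +/- {1.96 * std[0]:.1f}")
```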
arXiv Detail & Related papers (2021-09-23T18:19:58Z) - Interpretable Social Anchors for Human Trajectory Forecasting in Crowds [84.20437268671733]
We propose a neural network-based system to predict human trajectories in crowds.
We learn interpretable rule-based intents, and then utilise the expressibility of neural networks to model the scene-specific residual.
Our architecture is tested on the interaction-centric benchmark TrajNet++.
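A bare-bones version of the anchor-plus-residual idea: a constant-velocity extrapolation acts as the interpretable intent, and a small network predicts only a residual correction on top of it. The AnchorPlusResidual module and its sizes are illustrative assumptions, not the paper's architecture.

```python
# Constant-velocity anchor plus a learned residual for trajectory forecasting.
import torch
import torch.nn as nn

class AnchorPlusResidual(nn.Module):
    def __init__(self, obs_len=8, pred_len=12):
        super().__init__()
        self.pred_len = pred_len
        self.residual = nn.Sequential(
            nn.Linear(obs_len * 2, 64), nn.ReLU(), nn.Linear(64, pred_len * 2)
        )

    def forward(self, obs):                      # obs: (B, obs_len, 2) positions
        vel = obs[:, -1] - obs[:, -2]            # last-step velocity (B, 2)
        steps = torch.arange(1, self.pred_len + 1, device=obs.device).view(1, -1, 1)
        anchor = obs[:, -1:, :] + steps * vel.unsqueeze(1)   # constant-velocity rollout
        res = self.residual(obs.flatten(1)).view(-1, self.pred_len, 2)
        return anchor + res                      # (B, pred_len, 2) predicted positions

obs = torch.randn(4, 8, 2)
print(AnchorPlusResidual()(obs).shape)           # torch.Size([4, 12, 2])
```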
arXiv Detail & Related papers (2021-05-07T09:22:34Z) - Multi-Sample Online Learning for Spiking Neural Networks based on
Generalized Expectation Maximization [42.125394498649015]
Spiking Neural Networks (SNNs) capture some of the efficiency of biological brains by processing information through binary neural dynamic activations.
This paper proposes to leverage multiple compartments that sample independent spiking signals while sharing synaptic weights.
The key idea is to use these signals to obtain more accurate statistical estimates of the log-likelihood training criterion, as well as of its gradient.
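The variance-reduction idea, stated generically: draw several independent spike samples under shared weights and average their contributions to a score-function gradient estimate, using the sample mean as a baseline. The toy sketch below is a generic multi-sample estimator for Bernoulli spikes, not the paper's GEM-based derivation.

```python
# Multi-sample score-function gradient estimate with shared weights and a
# sample-mean baseline. Task and sizes are illustrative assumptions.
import torch

torch.manual_seed(0)
w = torch.zeros(10, requires_grad=True)          # shared synaptic weights (logits)
x = torch.randn(10)

def reward(spikes):                              # toy objective: match a target pattern
    target = (x > 0).float()
    return -(spikes - target).abs().sum()

K = 4                                            # number of compartments / samples
dist = torch.distributions.Bernoulli(logits=w + x)
spikes = dist.sample((K,))                       # (K, 10) independent spike samples
log_p = dist.log_prob(spikes).sum(dim=1)         # (K,) log-probabilities
rewards = torch.stack([reward(s) for s in spikes])
baseline = rewards.mean()                        # multi-sample baseline
loss = -((rewards - baseline).detach() * log_p).mean()
loss.backward()
print(w.grad[:3])
```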
arXiv Detail & Related papers (2021-02-05T16:39:42Z) - Forensicability of Deep Neural Network Inference Pipelines [68.8204255655161]
We propose methods to infer properties of the execution environment of machine learning pipelines by tracing characteristic numerical deviations in observable outputs.
Results from a series of proof-of-concept experiments give rise to possible forensic applications, such as the identification of the hardware platform used to produce deep neural network predictions.
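The basic observation can be illustrated very simply: the same model run in two execution environments may produce tiny but characteristic numerical deviations in its outputs, and those deviations can serve as a fingerprint of the platform. The arrays below are placeholders for outputs captured on two platforms.

```python
# Toy comparison of model outputs from two execution environments.
import numpy as np

out_platform_a = np.array([0.1234567, 0.7654321, 0.0000123], dtype=np.float32)
out_platform_b = out_platform_a + np.float32(3e-7) * np.array([1, -1, 1], dtype=np.float32)

deviation = np.max(np.abs(out_platform_a.astype(np.float64)
                          - out_platform_b.astype(np.float64)))
print(f"max absolute deviation: {deviation:.2e}")  # a simple feature for platform identification
```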
arXiv Detail & Related papers (2021-02-01T15:41:49Z) - Cross-Validation and Uncertainty Determination for Randomized Neural
Networks with Applications to Mobile Sensors [0.0]
Extreme learning machines provide an attractive and efficient method for supervised learning under limited computing resources and for green machine learning.
Results on supervised learning with such networks and on regression methods are discussed in terms of consistency and bounds for the generalization and prediction error.
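An extreme learning machine fixes a random hidden layer and fits only the linear readout, for example by ridge regression. A minimal sketch with assumed sizes and penalty:

```python
# Extreme learning machine: random, untrained hidden features + ridge readout.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                       # inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)     # targets

W = rng.normal(size=(10, 200))                       # random hidden weights (fixed)
b = rng.normal(size=200)
H = np.tanh(X @ W + b)                               # random hidden features

lam = 1e-2                                           # ridge regularization
beta = np.linalg.solve(H.T @ H + lam * np.eye(200), H.T @ y)   # fitted linear readout

X_test = rng.normal(size=(5, 10))
y_hat = np.tanh(X_test @ W + b) @ beta
print(y_hat)
```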
arXiv Detail & Related papers (2021-01-06T12:28:06Z) - Tighter risk certificates for neural networks [10.462889461373226]
We present two training objectives, used here for the first time in connection with training neural networks.
We also re-implement a previously used training objective based on a classical PAC-Bayes bound.
We compute risk certificates for the learnt predictors, based on part of the data used to learn the predictors.
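A typical certificate of this kind comes from a PAC-Bayes-kl bound: with probability at least 1 - delta, kl(empirical risk || true risk) <= (KL(Q||P) + ln(2*sqrt(n)/delta)) / n, and the certificate is the largest risk consistent with that inequality, found by inverting the binary kl divergence with bisection. The numbers plugged in below are placeholders, and the paper's specific training objectives differ.

```python
# Computing a PAC-Bayes-kl risk certificate by inverting the binary kl divergence.
import math

def kl_bernoulli(q, p):
    """Binary KL divergence kl(q || p)."""
    eps = 1e-12
    q, p = min(max(q, eps), 1 - eps), min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse(emp_risk, bound):
    """Largest p with kl(emp_risk || p) <= bound, via bisection."""
    lo, hi = emp_risk, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if kl_bernoulli(emp_risk, mid) > bound:
            hi = mid
        else:
            lo = mid
    return lo

def pac_bayes_kl_certificate(emp_risk, kl_qp, n, delta=0.05):
    rhs = (kl_qp + math.log(2 * math.sqrt(n) / delta)) / n
    return kl_inverse(emp_risk, rhs)

print(pac_bayes_kl_certificate(emp_risk=0.03, kl_qp=5000.0, n=50000))
```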
arXiv Detail & Related papers (2020-07-25T11:02:16Z) - Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
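A simplified way to see the effect of a diversity term: train ensemble members on the task loss while penalizing pairwise agreement between their outputs. The similarity penalty below is a much simpler stand-in for the paper's adversarial, information-bottleneck-based objective; the toy data and weights are assumptions.

```python
# Ensemble training with a simple pairwise-similarity penalty to encourage
# diverse member predictions.
import torch
import torch.nn as nn

members = nn.ModuleList([nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
                         for _ in range(3)])
opt = torch.optim.Adam(members.parameters(), lr=1e-3)
x = torch.randn(256, 2)
y = (x[:, :1] * x[:, 1:]).sign()                 # toy target

for step in range(200):
    preds = [m(x) for m in members]
    task_loss = sum(nn.functional.mse_loss(p, y) for p in preds)
    similarity = 0.0
    for i in range(len(preds)):
        for j in range(i + 1, len(preds)):
            similarity = similarity + torch.exp(-(preds[i] - preds[j]).pow(2)).mean()
    loss = task_loss + 0.1 * similarity          # penalize members that agree too closely
    opt.zero_grad()
    loss.backward()
    opt.step()
```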
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.