A stacked deep convolutional neural network to predict the remaining
useful life of a turbofan engine
- URL: http://arxiv.org/abs/2111.12689v1
- Date: Wed, 24 Nov 2021 18:36:28 GMT
- Title: A stacked deep convolutional neural network to predict the remaining
useful life of a turbofan engine
- Authors: David Solis-Martin, Juan Galan-Paez, Joaquin Borrego-Diaz
- Abstract summary: The solution is based on two Deep Convolutional Neural Networks stacked in two levels.
The proposed methodology ranked third in the 2021 PHM Conference Data Challenge.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents the data-driven techniques and methodologies used to
predict the remaining useful life (RUL) of a fleet of aircraft engines that can
suffer failures of diverse nature. The solution presented is based on two Deep
Convolutional Neural Networks (DCNN) stacked in two levels. The first DCNN
extracts a low-dimensional feature vector from the normalized raw data. The
second DCNN ingests a sequence of feature vectors produced by the first DCNN
and estimates the RUL. Model selection was carried out by means of Bayesian
optimization with a repeated random subsampling validation approach. The
proposed methodology ranked third in the 2021 PHM Conference
Data Challenge.
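The two-level pipeline described in the abstract can be sketched as a forward pass: a first convolutional network compresses each normalized sensor window into a feature vector, and a second convolutional network reads the resulting sequence of feature vectors and outputs a scalar RUL. This is an illustrative sketch only; the layer counts, kernel shapes, channel sizes, and window lengths below are hypothetical and not those of the paper, and training, Bayesian optimization, and preprocessing are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1D convolution: x is (T, C_in), kernels is (C_out, K, C_in)."""
    c_out, k, c_in = kernels.shape
    t_out = x.shape[0] - k + 1
    out = np.empty((t_out, c_out))
    for t in range(t_out):
        window = x[t:t + k]  # (K, C_in)
        out[t] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    return out

def relu(x):
    return np.maximum(x, 0.0)

def feature_dcnn(window, k1, k2):
    """First-level DCNN: normalized raw window -> low-dimensional feature vector."""
    h = relu(conv1d(window, k1))
    h = relu(conv1d(h, k2))
    return h.mean(axis=0)  # global average pooling yields the feature vector

def rul_dcnn(features, k3, w_out, b_out):
    """Second-level DCNN: sequence of feature vectors -> scalar RUL estimate."""
    h = relu(conv1d(features, k3))
    pooled = h.mean(axis=0)
    return float(pooled @ w_out + b_out)

# Hypothetical shapes: 14 sensor channels, windows of 50 cycles, 8-dim features.
k1 = rng.normal(scale=0.1, size=(16, 5, 14))
k2 = rng.normal(scale=0.1, size=(8, 5, 16))
k3 = rng.normal(scale=0.1, size=(8, 3, 8))
w_out = rng.normal(scale=0.1, size=8)
b_out = 0.0

# One engine's history as 10 consecutive normalized sensor windows.
windows = [rng.normal(size=(50, 14)) for _ in range(10)]
features = np.stack([feature_dcnn(w, k1, k2) for w in windows])  # (10, 8)
rul_estimate = rul_dcnn(features, k3, w_out, b_out)
```

The key design point carried over from the abstract is the separation of concerns: the first network never sees the whole degradation trajectory, and the second network never sees raw sensor data.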
Related papers
- Efficient Higher-order Convolution for Small Kernels in Deep Learning [0.0]
We propose a novel method to perform higher-order Volterra filtering with lower memory and computational costs.
Based on the proposed method, a new attention module called Higher-order Local Attention Block (HLA) is proposed and tested.
arXiv Detail & Related papers (2024-04-25T07:42:48Z)
- Inferring Data Preconditions from Deep Learning Models for Trustworthy
Prediction in Deployment [25.527665632625627]
It is important to reason about the trustworthiness of the model's predictions with unseen data during deployment.
Existing methods for specifying and verifying traditional software are insufficient for this task.
We propose a novel technique that uses rules derived from neural network computations to infer data preconditions.
arXiv Detail & Related papers (2024-01-26T03:47:18Z)
- Sparsifying Bayesian neural networks with latent binary variables and
normalizing flows [10.865434331546126]
We will consider two extensions to the latent binary Bayesian neural networks (LBBNN) method.
Firstly, by using the local reparametrization trick (LRT) to sample the hidden units directly, we get a more computationally efficient algorithm.
More importantly, by using normalizing flows on the variational posterior distribution of the LBBNN parameters, the network learns a more flexible variational posterior distribution than the mean field Gaussian.
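The local reparametrization trick (LRT) mentioned above can be illustrated on a single mean-field Gaussian layer: instead of drawing a weight sample per forward pass, one samples the pre-activations directly from their induced Gaussian, which needs one noise draw per activation rather than per weight. This sketch covers only the LRT for a plain Gaussian layer; the latent binary variables and normalizing flows of the LBBNN method are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(1)

def lrt_linear(x, w_mu, w_sigma):
    """Sample the pre-activations of a mean-field Gaussian Bayesian linear
    layer directly (local reparametrization trick) instead of sampling the
    weight matrix. x: (batch, d_in); w_mu, w_sigma: (d_in, d_out)."""
    act_mu = x @ w_mu                        # mean of each pre-activation
    act_var = (x ** 2) @ (w_sigma ** 2)      # variance of each pre-activation
    eps = rng.standard_normal(act_mu.shape)  # one noise draw per activation
    return act_mu + np.sqrt(act_var) * eps

x = rng.standard_normal((4, 3))
w_mu = rng.standard_normal((3, 2))
w_sigma = np.full((3, 2), 0.1)
h = lrt_linear(x, w_mu, w_sigma)  # (4, 2) sampled pre-activations
```

With `w_sigma` set to zero the layer degenerates to the deterministic map `x @ w_mu`, which is a convenient sanity check on the variance computation.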
arXiv Detail & Related papers (2023-05-05T09:40:28Z)
- Two-Timescale End-to-End Learning for Channel Acquisition and Hybrid
Precoding [94.40747235081466]
We propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems.
We develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter.
arXiv Detail & Related papers (2021-10-22T20:49:02Z)
- Estimating Traffic Speeds using Probe Data: A Deep Neural Network
Approach [1.5469452301122177]
This paper presents a dedicated Deep Neural Network architecture that reconstructs space-time traffic speeds on freeways given sparse data.
A large set of empirical Floating-Car Data (FCD) collected on German freeway A9 during two months is utilized.
The results show that the DNN is able to apply learned patterns, and reconstructs moving as well as stationary congested traffic with high accuracy.
arXiv Detail & Related papers (2021-04-19T23:32:12Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
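The "no vectorization" idea in the Rank-R FNN summary can be made concrete for a single hidden unit on a matrix input: constraining the unit's weight matrix to a rank-R CP form W = Σ_r u_r v_rᵀ lets the inner product ⟨W, X⟩ be computed mode by mode. This is a minimal sketch of that one building block, with hypothetical sizes, not the full network from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def rank_r_unit(x, u, v, bias):
    """One hidden unit whose weight matrix is constrained to a rank-R
    Canonical/Polyadic form W = sum_r u_r v_r^T, so the inner product
    <W, X> is computed without vectorizing the matrix input X.
    x: (I, J); u: (R, I); v: (R, J)."""
    score = sum(u_r @ x @ v_r for u_r, v_r in zip(u, v)) + bias
    return np.tanh(score)

x = rng.standard_normal((6, 5))   # matrix (order-2 tensor) input
u = rng.standard_normal((3, 6))   # R = 3 factors along mode 1
v = rng.standard_normal((3, 5))   # R = 3 factors along mode 2
h = rank_r_unit(x, u, v, bias=0.0)
```

The factorized form stores R·(I+J) parameters instead of I·J, which is where the structural savings for high-order inputs come from.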
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural
Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to learn solutions of the AC optimal power flow (AC-OPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated into other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- Improving predictions of Bayesian neural nets via local linearization [79.21517734364093]
We argue that the Gauss-Newton approximation should be understood as a local linearization of the underlying Bayesian neural network (BNN).
Because we use this linearized model for posterior inference, we should also predict using this modified model instead of the original one.
We refer to this modified predictive as "GLM predictive" and show that it effectively resolves common underfitting problems of the Laplace approximation.
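The "GLM predictive" idea can be written out in the standard Laplace-approximation notation; the symbols below (MAP estimate θ*, Jacobian J, posterior covariance Σ) follow common usage and are a sketch of the construction, not a transcription of the paper.

```latex
% Linearize the network output around the MAP estimate \theta_*:
f_{\mathrm{lin}}(x;\theta) = f(x;\theta_*) + J_{\theta_*}(x)\,(\theta - \theta_*),
\qquad J_{\theta_*}(x) = \nabla_\theta f(x;\theta)\big|_{\theta=\theta_*}.
% GLM predictive: average the *linearized* model, not the original network,
% over the Laplace posterior q(\theta) = \mathcal{N}(\theta_*, \Sigma):
p(y \mid x, \mathcal{D}) \approx \int p\big(y \mid f_{\mathrm{lin}}(x;\theta)\big)\, q(\theta)\, d\theta.
```

Since the Laplace covariance Σ was derived from the linearized (Gauss-Newton) model, predicting with the same linearized model keeps posterior and predictive consistent, which is the stated resolution of the underfitting problem.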
arXiv Detail & Related papers (2020-08-19T12:35:55Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and
Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
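A gradient-norm detector in the spirit of GraN can be sketched on a tiny model: take the network's own predicted label as the target, backpropagate the loss, and use the norm of the parameter gradient as the score (confident inputs yield small gradients, borderline or perturbed inputs yield large ones). The logistic-regression model below is an illustrative stand-in and is not the architecture or scoring pipeline from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_norm_score(x, w, b):
    """Gradient-norm score: backpropagate the loss at the model's own
    predicted label and return the norm of the parameter gradient."""
    p = sigmoid(x @ w + b)
    y_hat = float(p >= 0.5)      # the model's own prediction as target
    residual = p - y_hat         # d(binary cross-entropy)/d(logit)
    grad_w = residual * x        # gradient w.r.t. the weights
    grad_b = residual            # gradient w.r.t. the bias
    return float(np.sqrt(np.sum(grad_w ** 2) + grad_b ** 2))

w = rng.standard_normal(5)
b = 0.0
confident = 10.0 * w / np.linalg.norm(w)  # far from the decision boundary
borderline = np.zeros(5)                  # exactly on the boundary

# A confident input scores lower than a borderline one.
assert grad_norm_score(confident, w, b) < grad_norm_score(borderline, w, b)
```

Because the score reuses quantities already computed in a single backward pass, the approach stays time- and parameter-efficient, which matches the summary's claim.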
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.