An Interpretable Systematic Review of Machine Learning Models for
Predictive Maintenance of Aircraft Engine
- URL: http://arxiv.org/abs/2309.13310v1
- Date: Sat, 23 Sep 2023 08:54:10 GMT
- Title: An Interpretable Systematic Review of Machine Learning Models for
Predictive Maintenance of Aircraft Engine
- Authors: Abdullah Al Hasib, Ashikur Rahman, Mahpara Khabir and Md. Tanvir Rouf
Shawon
- Abstract summary: This paper presents an interpretable review of various machine learning and deep learning models for predicting aircraft engine maintenance needs.
In this study, sensor data is utilized to predict aircraft engine failure within a predetermined number of cycles using LSTM, Bi-LSTM, RNN, Bi-RNN, GRU, Random Forest, KNN, Naive Bayes, and Gradient Boosting.
Accuracies of 97.8%, 97.14%, and 96.42% are achieved by GRU, Bi-LSTM, and LSTM, respectively.
- Score: 0.12289361708127873
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents an interpretable review of various machine learning and
deep learning models for predicting aircraft engine maintenance needs and thereby
avoiding catastrophic failures. One advantage of the strategy is that it can work
with modest datasets. In this study, sensor data is utilized to predict
aircraft engine failure within a predetermined number of cycles using LSTM,
Bi-LSTM, RNN, Bi-RNN, GRU, Random Forest, KNN, Naive Bayes, and Gradient
Boosting. We explain how deep learning and machine learning can be used to
generate predictions in predictive maintenance using a straightforward scenario
with a single data source. We applied LIME to the models to help us understand
why the machine learning models did not perform as well as the deep learning
models. An extensive analysis of model behavior on several test samples is
presented to open up the black-box nature of the models. Accuracies of
97.8%, 97.14%, and 96.42% are achieved by GRU, Bi-LSTM, and LSTM, respectively,
which demonstrates the models' ability to predict maintenance needs at an early
stage.
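To make the setup concrete, here is a minimal sketch (not the authors' code) of one plausible way to frame "failure within a predetermined number of cycles" as binary classification on windowed sensor data, with a small LSTM in the spirit of the deep models above. The C-MAPSS-style columns (unit, cycle, sensor names), the window length, and the horizon are all assumptions.

```python
import numpy as np
import pandas as pd
from tensorflow import keras

WINDOW = 30    # look-back length fed to the network (assumed)
HORIZON = 30   # "predetermined number of cycles" before failure (assumed)

def add_labels(df: pd.DataFrame) -> pd.DataFrame:
    """Label a cycle 1 if the engine fails within HORIZON cycles."""
    out = df.copy()
    out["rul"] = out.groupby("unit")["cycle"].transform("max") - out["cycle"]
    out["label"] = (out["rul"] <= HORIZON).astype(int)
    return out

def make_sequences(df, sensor_cols):
    """Slide a WINDOW-long window over each engine's sensor history."""
    X, y = [], []
    for _, g in df.groupby("unit"):
        values = g[sensor_cols].to_numpy(dtype="float32")
        labels = g["label"].to_numpy()
        for end in range(WINDOW, len(g) + 1):
            X.append(values[end - WINDOW:end])
            y.append(labels[end - 1])
    return np.asarray(X), np.asarray(y)

def build_lstm(n_features):
    """Small LSTM binary classifier; GRU/Bidirectional variants follow
    by swapping the recurrent layers."""
    model = keras.Sequential([
        keras.layers.Input(shape=(WINDOW, n_features)),
        keras.layers.LSTM(64, return_sequences=True),
        keras.layers.LSTM(32),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# For the tabular models (Random Forest, KNN, ...), per-cycle feature rows
# can be explained with LIME, e.g.:
#   from lime.lime_tabular import LimeTabularExplainer
#   explainer = LimeTabularExplainer(X_train, feature_names=sensor_cols,
#                                    class_names=["healthy", "failing"])
#   explainer.explain_instance(X_test[0], rf.predict_proba)
```

Replacing the LSTM layers with keras.layers.GRU, or wrapping them in keras.layers.Bidirectional, yields the other deep variants compared in the paper.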
Related papers
- Addressing Bias Through Ensemble Learning and Regularized Fine-Tuning [0.2812395851874055]
This paper proposes a comprehensive approach using multiple methods to remove bias in AI models.
We train multiple models with the counter-bias of the pre-trained model through data splitting, local training, and regularized fine-tuning.
We conclude our solution with knowledge distillation that results in a single unbiased neural network.
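As a rough illustration of the distillation step described above (not the paper's implementation), the ensemble's softened probabilities can be averaged into teacher targets for a single student network:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_targets(ensemble_logits, T=2.0):
    """Average the counter-bias ensemble's softened predictions; the
    student is then trained with cross-entropy against these targets."""
    return np.mean([softmax(l, T) for l in ensemble_logits], axis=0)
```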
arXiv Detail & Related papers (2024-02-01T09:24:36Z) - Evaluating and Explaining Large Language Models for Code Using Syntactic
Structures [74.93762031957883]
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
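A hedged sketch of the core alignment idea: aggregate per-token log-probabilities over the character span of each AST node. The span representation and helper names here are assumptions, not ASTxplainer's actual API:

```python
def node_log_likelihood(node_span, token_spans, token_logprobs):
    """Average the log-probs of tokens whose character spans fall
    inside the AST node's (start, end) span."""
    start, end = node_span
    inside = [lp for (ts, te), lp in zip(token_spans, token_logprobs)
              if ts >= start and te <= end]
    return sum(inside) / len(inside) if inside else float("nan")
```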
arXiv Detail & Related papers (2023-08-07T18:50:57Z) - EAMDrift: An interpretable self retrain model for time series [0.0]
We present EAMDrift, a novel method that combines forecasts from multiple individual predictors by weighting each prediction according to a performance metric.
EAMDrift is designed to automatically adapt to out-of-distribution patterns in data and identify the most appropriate models to use at each moment.
Our study on real-world datasets shows that EAMDrift outperforms individual baseline models by 20% and achieves comparable accuracy results to non-interpretable ensemble models.
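The weighting idea can be pictured with a generic performance-weighted combination of forecasts (a sketch, not EAMDrift's exact rule), where models with lower recent error receive exponentially larger weight:

```python
import numpy as np

def combine_forecasts(preds, recent_errors):
    """preds: (n_models, horizon) forecasts; recent_errors: (n_models,)
    rolling error (e.g. MAE) of each model on recent data."""
    scores = np.exp(-np.asarray(recent_errors) / np.mean(recent_errors))
    weights = scores / scores.sum()
    return weights @ np.asarray(preds)
```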
arXiv Detail & Related papers (2023-05-31T13:25:26Z) - Synthetic Model Combination: An Instance-wise Approach to Unsupervised
Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
We are given access to a set of expert models and their predictions, alongside some limited information about the dataset used to train them.
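One instance-wise way to realize this (a sketch under assumed inputs, not the paper's algorithm) is to weight each expert by how close the test point lies to that expert's known training region:

```python
import numpy as np

def instance_wise_ensemble(x, experts, train_means):
    """Weight each expert's prediction by the proximity of x to the
    (assumed known) mean of that expert's training data."""
    dists = np.array([np.linalg.norm(x - m) for m in train_means])
    w = np.exp(-dists)
    w /= w.sum()
    return sum(wi * f(x) for wi, f in zip(w, experts))
```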
arXiv Detail & Related papers (2022-10-11T10:20:31Z) - Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse
Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
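Conceptually, the residual model corrects the simulator's next-state prediction. A minimal regression sketch (the paper's unscented Kalman filter integration is omitted):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def fit_residual_model(states, actions, sim_next, real_next):
    """Train on sparse real-robot data: residual = real - simulated."""
    X = np.hstack([states, actions])
    return GaussianProcessRegressor().fit(X, real_next - sim_next)

def corrected_prediction(model, state, action, sim_next):
    """Add the learned residual to the simulator's next-state estimate."""
    x = np.hstack([state, action]).reshape(1, -1)
    return sim_next + model.predict(x)[0]
```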
arXiv Detail & Related papers (2022-09-07T15:15:12Z) - Hessian-based toolbox for reliable and interpretable machine learning in
physics [58.720142291102135]
We present a toolbox for interpretability and reliability that is agnostic of the model architecture.
It provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an agnostic score for the model predictions.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
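The influence notion can be written compactly. Below is a small numerical sketch of the classic influence-function formula, offered here as an assumed analogue of the toolbox's Hessian-based score rather than its actual implementation:

```python
import numpy as np

def influence(grad_test, hessian, grad_train):
    """Influence-function estimate -grad_test' H^{-1} grad_train,
    where H is the Hessian of the training loss at the optimum."""
    return -grad_test @ np.linalg.solve(hessian, grad_train)
```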
arXiv Detail & Related papers (2021-08-04T16:32:59Z) - ALT-MAS: A Data-Efficient Framework for Active Testing of Machine
Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model under test using a Bayesian neural network (BNN).
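A rough picture of the estimation step (an assumed simplification, not ALT-MAS itself): sample plausible labels for the unlabeled test pool from the BNN's predictive distribution and average the metric over samples:

```python
import numpy as np

def estimate_accuracy(model_preds, bnn_label_probs, n_samples=100, rng=None):
    """model_preds: (n,) predicted classes of the model under test;
    bnn_label_probs: (n, k) BNN predictive distribution over true labels."""
    rng = rng or np.random.default_rng(0)
    n, k = bnn_label_probs.shape
    accs = []
    for _ in range(n_samples):
        sampled = np.array([rng.choice(k, p=p) for p in bnn_label_probs])
        accs.append(np.mean(sampled == model_preds))
    return np.mean(accs), np.std(accs)
```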
arXiv Detail & Related papers (2021-04-11T12:14:04Z) - Transfer Learning without Knowing: Reprogramming Black-box Machine
Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
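The zeroth-order step can be sketched as a two-point gradient estimate built from input-output queries only (a generic estimator; BAR's multi-label mapping is omitted):

```python
import numpy as np

def zo_gradient(loss_fn, x, mu=0.01, q=10, rng=None):
    """Two-point zeroth-order estimate of the gradient of loss_fn at x
    using q random directions; loss_fn is queried as a black box."""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(q):
        u = rng.standard_normal(x.shape)
        g += (loss_fn(x + mu * u) - loss_fn(x - mu * u)) / (2 * mu) * u
    return g / q
```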
arXiv Detail & Related papers (2020-07-17T01:52:34Z) - In Pursuit of Interpretable, Fair and Accurate Machine Learning for
Criminal Recidivism Prediction [19.346391120556884]
This study trains interpretable models that output probabilities rather than binary predictions, and uses quantitative fairness definitions to assess the models.
We generated black-box and interpretable ML models on two different criminal recidivism datasets from Florida and Kentucky.
Several interpretable ML models can predict recidivism as well as black-box ML models and are more accurate than COMPAS or the Arnold PSA.
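Quantitative fairness definitions of the kind used in such studies can be computed directly from the probability outputs. A minimal sketch of one such measure, a statistical-parity gap (not necessarily the paper's exact metric):

```python
import numpy as np

def statistical_parity_gap(probs, group):
    """Absolute difference in mean predicted recidivism probability
    between two groups (group is a 0/1 membership array)."""
    probs, group = np.asarray(probs), np.asarray(group)
    return abs(probs[group == 0].mean() - probs[group == 1].mean())
```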
arXiv Detail & Related papers (2020-05-08T17:16:31Z) - Energy Predictive Models for Convolutional Neural Networks on Mobile
Platforms [0.0]
Energy use is a key concern when deploying deep learning models on mobile devices.
We build layer-type predictive models for the fully-connected and pooling layers using 12 representative Convolutional Neural Networks (ConvNets) on the Jetson TX1 and the Snapdragon 820.
We obtain an accuracy between 76% and 85%, with a model complexity of 1, for the overall energy prediction of the test ConvNets across different hardware-software combinations.
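A layer-type predictive model of this kind can be as simple as a regression from layer hyperparameters to measured energy; a hedged sketch in which the feature choice is an assumption:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_fc_energy_model(layer_features, measured_energy_joules):
    """One model per layer type; for fully-connected layers, features
    might be (input_size, output_size, input_size * output_size)."""
    return LinearRegression().fit(layer_features, measured_energy_joules)

def predict_network_energy(models, layers):
    """Sum per-layer predictions; layers is [(layer_type, features), ...]."""
    return sum(models[t].predict(np.asarray(f).reshape(1, -1))[0]
               for t, f in layers)
```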
arXiv Detail & Related papers (2020-04-10T17:35:40Z)