A two-level machine learning framework for predictive maintenance:
comparison of learning formulations
- URL: http://arxiv.org/abs/2204.10083v1
- Date: Thu, 21 Apr 2022 13:24:28 GMT
- Title: A two-level machine learning framework for predictive maintenance:
comparison of learning formulations
- Authors: Valentin Hamaide, Denis Joassin, Lauriane Castin, François Glineur
- Abstract summary: This paper aims to design and compare different formulations for predictive maintenance in a two-level framework.
The first level is responsible for building a health indicator by aggregating features using a learning algorithm.
The second level consists of a decision-making system that can trigger an alarm based on this health indicator.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting incoming failures and scheduling maintenance based on sensor
information from industrial machines is increasingly important to avoid downtime
and machine failure. Different machine learning formulations can be used to
solve the predictive maintenance problem. However, many of the approaches
studied in the literature are not directly applicable to real-life scenarios:
they either rely on labelled machine malfunctions, in the case of
classification and fault detection, or on finding a monotonic health indicator
on which a prediction can be made, in the case of regression and
remaining-useful-life estimation, neither of which is always feasible.
Moreover, the decision-making part of the problem is not always studied in
conjunction with the prediction phase. This paper designs and compares
different formulations for predictive maintenance in a two-level framework,
together with metrics that quantify both failure detection performance and the
timing of the maintenance decision. The first level builds a health indicator
by aggregating features using a learning algorithm. The second level is a
decision-making system that can trigger an alarm based on this health
indicator. Three degrees of refinement are compared in the first level of the
framework, from a simple threshold-based univariate predictive technique to
supervised learning methods based on the remaining time before failure. We use
the Support Vector Machine (SVM) and its variants as the common algorithm
across all formulations. Applying and comparing the different strategies on a
real-world rotating-machine case study, we observe that while a simple model
can already perform well, more sophisticated refinements further improve the
predictions when parameters are well chosen.
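
To make the two-level idea concrete, here is a minimal sketch, not the authors' exact pipeline: level 1 regresses sensor features against the remaining time before failure with an SVM-family regressor to obtain a health indicator (this corresponds roughly to the most refined formulation above), and level 2 raises an alarm when that indicator crosses a threshold. The synthetic data, the threshold value, and all names are assumptions for illustration.

```python
# Illustrative two-level predictive-maintenance sketch (assumptions, not the paper's method):
# level 1 aggregates features into a health indicator with an SVM-family regressor;
# level 2 triggers an alarm when the indicator crosses a threshold.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic run-to-failure data (made up): 200 time steps, 5 sensor features
# that drift as the machine degrades; the label is time remaining before failure.
n_steps, n_features = 200, 5
degradation = np.linspace(0.0, 1.0, n_steps)
X = rng.normal(size=(n_steps, n_features)) + degradation[:, None]
time_to_failure = n_steps - 1 - np.arange(n_steps)  # decreasing label

# Level 1: learn a health indicator (here, predicted time to failure).
level1 = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
level1.fit(X, time_to_failure)
health_indicator = level1.predict(X)

# Level 2: decision rule -- alarm once the indicator falls below a threshold.
# The threshold is arbitrary here; in practice it trades off early warnings
# against false alarms, which is what the paper's timing metrics quantify.
THRESHOLD = 30.0
alarm_steps = np.where(health_indicator < THRESHOLD)[0]
if alarm_steps.size:
    print(f"First alarm at step {alarm_steps[0]} of {n_steps}")
```

In this toy setup the indicator decreases roughly monotonically by construction; the paper's point is precisely that such monotonicity cannot be assumed in real data, which motivates comparing several level-1 formulations.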
Related papers
- Machine Learning for predicting chaotic systems [0.0]
We show that well-tuned simple methods, as well as untuned baseline methods, often outperform state-of-the-art deep learning models.
These findings underscore the importance of matching prediction methods to data characteristics and available computational resources.
arXiv Detail & Related papers (2024-07-29T16:34:47Z)
- Comprehensive Study Of Predictive Maintenance In Industries Using Classification Models And LSTM Model [0.0]
The study delves into several machine learning classification techniques, including Support Vector Machine (SVM), Random Forest, Logistic Regression, and an LSTM-based Convolutional Neural Network, for predicting and analyzing machine performance.
Its primary objective is to assess how well these algorithms perform on this task, considering factors such as accuracy, precision, recall, and F1 score (a minimal sketch of these metric computations follows this entry).
arXiv Detail & Related papers (2024-03-15T12:47:45Z)
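
For reference, the four metrics named in the entry above are standard and can be computed as below; the labels and predictions are made-up data, used only to show the scikit-learn calls.

```python
# Hypothetical labels/predictions; the point is only how accuracy, precision,
# recall, and F1 (the metrics named above) are computed in scikit-learn.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 1, 0, 1, 1, 0, 0]   # 1 = imminent failure (made-up data)
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
```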
- Learning-Based Approaches to Predictive Monitoring with Conformal Statistical Guarantees [2.1684857243537334]
This tutorial focuses on efficient methods for predictive monitoring (PM).
PM is the problem of detecting future violations of a given requirement from the current state of a system.
We present a general and comprehensive framework summarizing our approach to the predictive monitoring of CPSs.
arXiv Detail & Related papers (2023-12-04T15:16:42Z)
- Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find deployed machine learning is prone to systemic failure, meaning some users are exclusively misclassified by all models available.
These examples demonstrate ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
arXiv Detail & Related papers (2023-07-12T01:11:52Z)
- Evaluating Machine Unlearning via Epistemic Uncertainty [78.27542864367821]
This work presents an evaluation of Machine Unlearning algorithms based on uncertainty.
To the best of our knowledge, this is the first general evaluation of this kind.
arXiv Detail & Related papers (2022-08-23T09:37:31Z)
- Prediction of Dilatory Behavior in eLearning: A Comparison of Multiple Machine Learning Models [0.2963240482383777]
Procrastination, the irrational delay of tasks, is a common occurrence in online learning.
Research focusing on such predictions is scarce.
Studies involving different types of predictors and comparisons between the predictive performance of various methods are virtually non-existent.
arXiv Detail & Related papers (2022-06-30T07:24:08Z)
- Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have desired properties, admit a natural error measure as well as algorithms with strong performance guarantees.
arXiv Detail & Related papers (2022-02-21T13:18:11Z)
- Learning Predictions for Algorithms with Predictions [49.341241064279714]
We introduce a general design approach for algorithms that learn predictors.
We apply techniques from online learning to learn against adversarial instances, tune robustness-consistency trade-offs, and obtain new statistical guarantees.
We demonstrate the effectiveness of our approach at deriving learning algorithms by analyzing methods for bipartite matching, page migration, ski-rental, and job scheduling.
arXiv Detail & Related papers (2022-02-18T17:25:43Z)
- Learning from Heterogeneous Data Based on Social Interactions over Graphs [58.34060409467834]
This work proposes a decentralized architecture, where individual agents aim at solving a classification problem while observing streaming features of different dimensions.
We show that the proposed strategy enables the agents to learn consistently under this highly heterogeneous setting.
arXiv Detail & Related papers (2021-12-17T12:47:18Z)
- Score-Based Change Detection for Gradient-Based Learning Machines [9.670556223243182]
We present a generic score-based change detection method that can detect a change in any number of components of a machine learning model trained via empirical risk minimization.
We establish the consistency of the hypothesis test and show how to calibrate it to achieve a prescribed false alarm rate (a generic score-monitoring sketch, not this paper's test, follows this entry).
arXiv Detail & Related papers (2021-06-27T01:38:11Z)
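
The entry above is only summarized at a high level, so the sketch below is a generic illustration of score-based monitoring, not the paper's actual hypothesis test or calibration: a one-sided CUSUM accumulates a per-sample score statistic and flags a change once the excess crosses a threshold. The drift and threshold values and the synthetic scores are assumptions.

```python
# Generic one-sided CUSUM on a per-sample score statistic (illustration only;
# the cited paper's hypothesis test and false-alarm calibration are more involved).
import numpy as np

def cusum_alarm(scores, drift=0.5, threshold=8.0):
    """Return the first index where the cumulative (score - drift) excess
    exceeds the threshold, or None if no change is flagged."""
    s = 0.0
    for i, z in enumerate(scores):
        s = max(0.0, s + z - drift)   # reset to zero whenever evidence fades
        if s > threshold:
            return i
    return None

rng = np.random.default_rng(1)
# Scores centered near 0 before the change and near 1.5 after it (made-up data).
scores = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])
print("change flagged at index:", cusum_alarm(scores))
```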
- Distribution-Free, Risk-Controlling Prediction Sets [112.9186453405701]
We show how to generate set-valued predictions from a black-box predictor that control the expected loss on future test points at a user-specified level.
Our approach provides explicit finite-sample guarantees for any dataset by using a holdout set to calibrate the size of the prediction sets (a simplified holdout-calibration sketch follows this entry).
arXiv Detail & Related papers (2021-01-07T18:59:33Z)
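
To give a flavor of the holdout-calibration idea in the last entry, here is a simplified split-conformal-style sketch, not the paper's risk-controlling procedure: the interval half-width is set to an empirical quantile of holdout residuals so that new intervals cover at roughly the target rate. The data and the predictor are made up.

```python
# Simplified holdout calibration of a prediction interval (illustration only;
# the cited paper controls a general expected loss with finite-sample bounds).
import numpy as np

rng = np.random.default_rng(2)

# Made-up truths and noisy black-box predictions on a holdout set.
n_holdout, alpha = 500, 0.1                            # target: ~90% coverage
y_hold = rng.normal(size=n_holdout)
pred_hold = y_hold + rng.normal(0.0, 0.5, n_holdout)   # noisy predictor

# Calibrate the half-width as a finite-sample-adjusted (1 - alpha) quantile
# of the absolute residuals.
residuals = np.abs(y_hold - pred_hold)
q = np.quantile(residuals, np.ceil((n_holdout + 1) * (1 - alpha)) / n_holdout)

# Prediction set for a new point: [prediction - q, prediction + q].
new_pred = 0.3
print(f"prediction set: [{new_pred - q:.3f}, {new_pred + q:.3f}]")
```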