Interpretable Predictive Maintenance for Hard Drives
- URL: http://arxiv.org/abs/2102.06509v1
- Date: Fri, 12 Feb 2021 13:25:58 GMT
- Title: Interpretable Predictive Maintenance for Hard Drives
- Authors: Maxime Amram, Jack Dunn, Jeremy J. Toledano, Ying Daisy Zhuo
- Abstract summary: We consider the task of predicting hard drive failure in a data center using recent algorithms for interpretable machine learning.
We demonstrate that these methods provide meaningful insights about short- and long-term drive health, while also maintaining high predictive performance.
- Score: 0.5352699766206808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing machine learning approaches for data-driven predictive maintenance
are usually black boxes that claim high predictive power yet cannot be
understood by humans. This limits the ability of humans to use these models to
derive insights and understanding of the underlying failure mechanisms, and
also limits the degree of confidence that can be placed in such a system to
perform well on future data. We consider the task of predicting hard drive
failure in a data center using recent algorithms for interpretable machine
learning. We demonstrate that these methods provide meaningful insights about
short- and long-term drive health, while also maintaining high predictive
performance. We also show that these analyses still deliver useful insights
even when limited historical data is available, enabling their use in
situations where data collection has only recently begun.
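As a concrete illustration of the kind of pipeline the abstract describes, here is a minimal sketch that trains a shallow, human-readable decision tree on SMART attributes to flag at-risk drives. It uses scikit-learn's DecisionTreeClassifier as a stand-in for the interpretable tree algorithms the paper evaluates; the file name, feature list, and label column are illustrative assumptions, not the paper's data schema.

```python
# Minimal sketch: an interpretable drive-failure classifier.
# A shallow decision tree stands in for the paper's interpretable methods;
# the SMART feature names, file name, and label column are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Assumed schema: one row per drive-day, SMART counters plus a binary
# label marking failure within the next 30 days.
df = pd.read_csv("smart_daily.csv")  # hypothetical file
features = ["smart_5_raw", "smart_187_raw", "smart_197_raw", "smart_198_raw"]
X, y = df[features], df["fails_within_30d"]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# A depth limit keeps the model inspectable while still capturing
# threshold effects such as "reallocated sectors above k implies risk".
clf = DecisionTreeClassifier(max_depth=3, class_weight="balanced")
clf.fit(X_train, y_train)

print(export_text(clf, feature_names=features))  # the learned rules, as text
print("test accuracy:", clf.score(X_test, y_test))
```

The depth limit is what makes the model readable: export_text prints the full set of learned threshold rules, which is the kind of short- and long-term drive-health insight the abstract refers to.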
Related papers
- Enhancing End-to-End Autonomous Driving with Latent World Model [78.22157677787239]
We propose a novel self-supervised method to enhance end-to-end driving without the need for costly labels.
Our framework, LAW, uses a LAtent World model to predict future latent features based on the predicted ego actions and the latent feature of the current frame.
As a result, our approach achieves state-of-the-art performance in both open-loop and closed-loop benchmarks without costly annotations.
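A minimal sketch of the latent-world-model idea described above, under assumed shapes and names: the model predicts the next frame's latent feature from the current latent and the predicted ego action, and is supervised by the encoder's own future latents, so no manual labels are needed. This is an illustration, not the paper's architecture.

```python
# Hypothetical sketch of a latent world model: next latent from
# (current latent, predicted ego action). All sizes are assumptions.
import torch
import torch.nn as nn

class LatentWorldModel(nn.Module):
    def __init__(self, latent_dim=256, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 512),
            nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, latent, action):
        return self.net(torch.cat([latent, action], dim=-1))

model = LatentWorldModel()
latent_t = torch.randn(8, 256)            # current-frame latents (batch of 8)
action_t = torch.randn(8, 2)              # predicted ego actions
next_latent_target = torch.randn(8, 256)  # stand-in for the next-frame latent
loss = nn.functional.mse_loss(model(latent_t, action_t), next_latent_target)
```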
arXiv Detail & Related papers (2024-06-12T17:59:21Z)
- Explainable Predictive Maintenance: A Survey of Current Methods, Challenges and Opportunities [2.913761513290171]
Predictive maintenance methods allow maintainers of systems and hardware to reduce the financial and time costs of upkeep.
This has drawn the field of Explainable AI (XAI) to introduce explainability and interpretability into predictive maintenance systems.
XAI brings methods to the field of predictive maintenance that can strengthen user trust while maintaining well-performing systems.
arXiv Detail & Related papers (2024-01-15T18:06:59Z)
- Robust Machine Learning by Transforming and Augmenting Imperfect Training Data [6.928276018602774]
This thesis explores several data sensitivities of modern machine learning.
We first discuss how to prevent ML from codifying prior human discrimination measured in the training data.
We then discuss the problem of learning from data containing spurious features, which provide predictive fidelity during training but are unreliable upon deployment.
arXiv Detail & Related papers (2023-12-19T20:49:28Z)
- Is Self-Supervised Pretraining Good for Extrapolation in Molecular Property Prediction? [16.211138511816642]
In materials science, the prediction of unobserved values, commonly referred to as extrapolation, is critical for property prediction.
We propose an experimental framework to demonstrate this and empirically reveal that, while models are unable to accurately extrapolate absolute property values, self-supervised pretraining enables them to learn the relative tendencies of unobserved property values.
arXiv Detail & Related papers (2023-08-16T03:38:43Z)
- Interpretable Machine Learning for Discovery: Statistical Challenges & Opportunities [1.2891210250935146]
We discuss and review the field of interpretable machine learning.
We outline the types of discoveries that can be made using interpretable machine learning.
We focus on the grand challenge of how to validate these discoveries in a data-driven manner.
arXiv Detail & Related papers (2023-08-02T23:57:31Z)
- Re-thinking Data Availability Attacks Against Deep Neural Networks [53.64624167867274]
In this paper, we re-examine the concept of unlearnable examples and discern that the existing robust error-minimizing noise presents an inaccurate optimization objective.
We introduce a novel optimization paradigm that yields improved protection results with reduced computational time requirements.
arXiv Detail & Related papers (2023-05-18T04:03:51Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Understanding the World Through Action [91.3755431537592]
I will argue that a general, principled, and powerful framework for utilizing unlabeled data can be derived from reinforcement learning.
I will discuss how such a procedure is more closely aligned with potential downstream tasks.
arXiv Detail & Related papers (2021-10-24T22:33:52Z)
- Hessian-based toolbox for reliable and interpretable machine learning in physics [58.720142291102135]
We present a toolbox for interpretability and reliability that is agnostic of the model architecture.
It provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an extrapolation score for the model predictions.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
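A rough sketch of the first capability listed above, the influence of a single training point on the prediction at a test point, via the classic Hessian-based estimate -g_test^T H^{-1} g_train on a toy linear model. All names and data here are illustrative assumptions, not the toolbox's actual API.

```python
# Toy influence-function computation via the training-loss Hessian.
import torch

torch.manual_seed(0)
X = torch.randn(20, 3)                                    # toy training inputs
y = X @ torch.tensor([1.0, -2.0, 0.5]) + 0.1 * torch.randn(20)
w = torch.zeros(3, requires_grad=True)                    # linear-model weights

def train_loss(w):
    return ((X @ w - y) ** 2).mean()                      # mean squared error

opt = torch.optim.SGD([w], lr=0.1)                        # fit roughly
for _ in range(200):
    opt.zero_grad()
    train_loss(w).backward()
    opt.step()

H = torch.autograd.functional.hessian(train_loss, w)      # 3x3 Hessian at the fit
x_test, y_test = torch.randn(3), 0.3                      # a test point
g_test = torch.autograd.grad((x_test @ w - y_test) ** 2, w)[0]
g_train = torch.autograd.grad((X[0] @ w - y[0]) ** 2, w)[0]  # training point 0

influence = -(g_test @ torch.linalg.solve(H, g_train))    # -g_test^T H^{-1} g_train
print(float(influence))
```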
arXiv Detail & Related papers (2021-08-04T16:32:59Z)
- Injecting Knowledge in Data-driven Vehicle Trajectory Predictors [82.91398970736391]
Vehicle trajectory prediction tasks have been commonly tackled from two perspectives: knowledge-driven or data-driven.
In this paper, we propose to learn a "Realistic Residual Block" (RRB) which effectively connects these two perspectives.
Our proposed method outputs realistic predictions by confining the residual range and taking into account its uncertainty.
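As a sketch of what confining the residual range might look like in code: a learned correction is squashed with tanh and scaled by a fixed bound before being added to a knowledge-driven rollout. The shapes, the bound, and the class name are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical residual block: bounded data-driven correction added to
# a knowledge-driven trajectory prediction. All sizes are assumptions.
import torch
import torch.nn as nn

class ResidualBlockSketch(nn.Module):
    def __init__(self, feat_dim=64, horizon=12, max_residual=0.5):
        super().__init__()
        self.head = nn.Linear(feat_dim, horizon * 2)  # (x, y) offset per step
        self.max_residual = max_residual              # bound in metres (assumed)

    def forward(self, knowledge_pred, features):
        # tanh confines each offset to [-max_residual, max_residual]
        res = torch.tanh(self.head(features)).view(-1, knowledge_pred.shape[1], 2)
        return knowledge_pred + self.max_residual * res

block = ResidualBlockSketch()
knowledge_pred = torch.zeros(4, 12, 2)  # e.g., a constant-velocity rollout
features = torch.randn(4, 64)           # scene encoding for 4 agents
refined = block(knowledge_pred, features)
```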
arXiv Detail & Related papers (2021-03-08T16:03:09Z)
- A Safety Framework for Critical Systems Utilising Deep Neural Networks [13.763070043077633]
This paper presents a principled and novel safety argument framework for critical systems that utilise deep neural networks.
The approach allows various forms of prediction, e.g., the future reliability of passing some demands, or the confidence in a required reliability level.
It is supported by a Bayesian analysis using operational data and the recent verification and validation techniques for deep learning.
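The sketch below shows one simple form such a Bayesian reliability argument can take: a Beta prior on the per-demand failure probability, updated with failure-free operational data, then used to predict the chance of passing future demands. The prior and demand counts are illustrative assumptions, not the paper's analysis.

```python
# Conjugate Beta-Binomial update for per-demand failure probability (pfd).
import numpy as np
from scipy import stats

alpha, beta = 1.0, 99.0               # prior: mean pfd around 0.01 (assumed)
failures, demands = 0, 5000           # operational data: no failures observed

post = stats.beta(alpha + failures, beta + demands - failures)

# Posterior confidence that pfd is below a required level
print("P(pfd < 1e-3):", post.cdf(1e-3))

# Predictive probability of passing the next n demands, E[(1 - pfd)^n],
# estimated by Monte Carlo over the posterior
samples = post.rvs(100_000, random_state=0)
print("P(pass next 1000 demands):", ((1 - samples) ** 1000).mean())
```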
arXiv Detail & Related papers (2020-03-07T23:35:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.