Continual Learning for Predictive Maintenance: Overview and Challenges
- URL: http://arxiv.org/abs/2301.12467v2
- Date: Thu, 29 Jun 2023 08:55:12 GMT
- Title: Continual Learning for Predictive Maintenance: Overview and Challenges
- Authors: Julio Hurtado, Dario Salvati, Rudy Semola, Mattia Bosio, and Vincenzo Lomonaco
- Abstract summary: We present a brief introduction to predictive maintenance, non-stationary environments, and continual learning.
We then discuss the current challenges of both predictive maintenance and continual learning, proposing future directions at the intersection of both areas.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning techniques have become one of the main drivers for
solving engineering problems effectively and efficiently. For instance,
Predictive Maintenance methods have been used to improve predictions of when
maintenance is needed on different machines and in different operating
contexts. However, deep learning methods are not without limitations: these
models are normally trained on a fixed distribution that reflects only the
current state of the problem. Due to internal or external factors, the state
of the problem can change, and performance decreases because the model cannot
generalize and adapt. In contrast to this stationary training setting,
real-world applications change their environments constantly, creating the
need to continually adapt the model to evolving scenarios. To aid in this
endeavor, Continual Learning methods propose ways to keep adapting prediction
models and to incorporate new knowledge after deployment. Despite the
advantages of these techniques, challenges remain in applying them to
real-world problems. In this work, we present a brief introduction to
predictive maintenance, non-stationary environments, and continual learning,
together with an extensive review of the current state of applying continual
learning in real-world applications, specifically in predictive maintenance.
We then discuss the current challenges of both predictive maintenance and
continual learning, proposing future directions at the intersection of both
areas. Finally, we propose a novel way to create benchmarks that favor the
application of continual learning methods in more realistic environments,
giving specific examples from predictive maintenance.
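As a concrete illustration of the post-deployment adaptation loop the abstract describes, below is a minimal sketch of experience replay, one common continual learning strategy, applied to a hypothetical sensor-based fault classifier. The model size, feature dimensions, and buffer policy are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of replay-based continual adaptation for a hypothetical
# fault classifier. All dimensions and the buffer policy are assumptions.
import random
import torch
import torch.nn as nn

class FaultClassifier(nn.Module):
    """Small MLP over a flattened window of sensor readings."""
    def __init__(self, n_features=64, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def adapt(model, buffer, new_x, new_y, replay_size=32, lr=1e-3):
    """One post-deployment update: train on fresh data mixed with replayed
    old samples so the new regime is learned without erasing earlier ones.
    (A persistent optimizer would normally live outside this function.)"""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    x, y = new_x, new_y
    if buffer:
        old = random.sample(buffer, min(replay_size, len(buffer)))
        x = torch.cat([new_x, torch.stack([o[0] for o in old])])
        y = torch.cat([new_y, torch.stack([o[1] for o in old])])
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    buffer.extend(zip(new_x, new_y))  # remember the new regime for later replay
    return loss.item()

model, buffer = FaultClassifier(), []
x_new = torch.randn(16, 64)            # fresh sensor windows from the field
y_new = torch.randint(0, 4, (16,))     # fresh fault labels
print(adapt(model, buffer, x_new, y_new))
```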
Related papers
- Temporal-Difference Variational Continual Learning [89.32940051152782] (2024-10-10)
A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks.
In Continual Learning settings, models often struggle to balance learning new tasks with retaining previous knowledge.
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations.
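As a rough illustration of the variational flavor of this idea (a generic toy, not the paper's temporal-difference objective), the sketch below regularizes a diagonal-Gaussian posterior toward several stored previous posteriors via weighted KL terms; all numbers are made up.

```python
# Toy sketch: task loss plus KL pulls toward multiple past posteriors.
import numpy as np

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    """KL(q || p) between diagonal Gaussians, summed over dimensions."""
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def regularized_loss(data_loss, mu, var, past_posteriors, weights):
    """Task loss plus a weighted KL pull toward each stored past posterior."""
    penalty = sum(w * kl_diag_gauss(mu, var, m, v)
                  for w, (m, v) in zip(weights, past_posteriors))
    return data_loss + penalty

# toy usage: two stored posteriors over a 3-parameter model
past = [(np.zeros(3), np.ones(3)),
        (np.array([0.5, 0.0, -0.5]), np.full(3, 0.5))]
mu, var = np.array([0.4, 0.1, -0.2]), np.full(3, 0.2)
print(regularized_loss(1.25, mu, var, past, weights=[0.5, 0.5]))
```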
- A Practitioner's Guide to Continual Multimodal Pretraining [83.63894495064855] (2024-08-26)
Multimodal foundation models serve numerous applications at the intersection of vision and language.
To keep models updated, research into continual pretraining mainly explores scenarios with either infrequent, indiscriminate updates on large-scale new data, or frequent, sample-level updates.
We introduce FoMo-in-Flux, a continual multimodal pretraining benchmark with realistic compute constraints and practical deployment requirements.
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097] (2023-09-15)
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce BAdam, a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
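BAdam itself is not reproduced here, but the sketch below shows the generic prior-based recipe such methods build on: a quadratic penalty that anchors parameters judged important for earlier tasks. The uniform importance weights are a placeholder assumption.

```python
# Generic prior-based regularizer (EWC-style quadratic anchor), a sketch of
# the family BAdam belongs to, not BAdam's actual update rule.
import torch

def prior_penalty(model, anchors, importances, strength=100.0):
    """Sum over parameters of importance * (theta - theta_old)^2."""
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (importances[name] * (p - anchors[name]) ** 2).sum()
    return strength * loss

# toy usage: anchor the model after "task 1", then regularize task-2 training
model = torch.nn.Linear(8, 2)
anchors = {n: p.detach().clone() for n, p in model.named_parameters()}
importances = {n: torch.ones_like(p) for n, p in model.named_parameters()}  # placeholder
x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))
task_loss = torch.nn.functional.cross_entropy(model(x), y)
(task_loss + prior_penalty(model, anchors, importances)).backward()
```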
- Continual Learning with Pretrained Backbones by Tuning in the Input Space [44.97953547553997] (2023-06-05)
The intrinsic difficulty in adapting deep learning models to non-stationary environments limits the applicability of neural networks to real-world tasks.
We propose a novel strategy to make the fine-tuning procedure more effective by avoiding updates to the pre-trained part of the network and learning not only the usual classification head, but also a set of newly introduced learnable parameters.
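As a sketch of that recipe (frozen pre-trained body, trainable head plus input-space parameters), here is one plausible additive parameterization; the paper's exact formulation may differ.

```python
# Frozen backbone + learnable input-space shift: an illustrative sketch.
import torch
import torch.nn as nn

class InputTunedModel(nn.Module):
    """Frozen pre-trained backbone; only the head and an additive
    input-space perturbation are trained (one plausible parameterization)."""
    def __init__(self, backbone, feat_dim, n_classes, in_shape):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False                   # pre-trained part stays fixed
        self.input_shift = nn.Parameter(torch.zeros(*in_shape))
        self.head = nn.Linear(feat_dim, n_classes)    # the usual classification head

    def forward(self, x):
        return self.head(self.backbone(x + self.input_shift))

# toy usage with a stand-in backbone
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
model = InputTunedModel(backbone, feat_dim=64, n_classes=5, in_shape=(32,))
trainable = [p for p in model.parameters() if p.requires_grad]  # shift + head only
out = model(torch.randn(8, 32))
```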
- Resilient Constrained Learning [94.27081585149836] (2023-06-04)
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning, after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
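A toy numeric sketch of the resilient idea: rather than enforcing a fixed requirement g(x) <= 0, a relaxation u >= 0 is learned jointly, at a quadratic price h(u), via gradient descent-ascent on the Lagrangian. All functions below are invented for illustration.

```python
# Resilient constrained learning, toy version: relax the constraint g(x) <= 0
# by a learned slack u >= 0 that costs h(u). Invented 1-D problem.
import numpy as np

f = lambda x: (x - 3.0) ** 2          # objective
g = lambda x: x - 1.0                 # constraint g(x) <= 0, i.e., x <= 1
h = lambda u: 5.0 * u ** 2            # price of relaxing the requirement

x, u, lam, lr = 0.0, 0.0, 0.0, 0.05
for _ in range(500):
    # primal steps on the Lagrangian f(x) + h(u) + lam * (g(x) - u)
    x -= lr * (2 * (x - 3.0) + lam)
    u = max(0.0, u - lr * (10 * u - lam))
    # dual ascent on the (relaxed) constraint violation
    lam = max(0.0, lam + lr * (g(x) - u))
print(f"x={x:.2f}, relaxation u={u:.2f}, multiplier={lam:.2f}")
```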
- PIVOT: Prompting for Video Continual Learning [50.80141083993668] (2022-12-09)
We introduce PIVOT, a novel method that leverages extensive knowledge in pre-trained models from the image domain.
Our experiments show that PIVOT improves state-of-the-art methods by a significant 27% on the 20-task ActivityNet setup.
- Continual Predictive Learning from Videos [100.27176974654559] (2022-04-12)
We study a new continual learning problem in the context of video prediction.
We propose the continual predictive learning (CPL) approach, which learns a mixture world model via predictive experience replay.
We construct two new benchmarks based on RoboNet and KTH, in which different tasks correspond to different physical robotic environments or human actions.
- Continually Learning Self-Supervised Representations with Projected Functional Regularization [39.92600544186844] (2021-12-30)
Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised methods.
However, these methods are unable to acquire new knowledge incrementally; in fact, they are mostly used only as a pre-training phase with IID data.
To prevent forgetting of previous knowledge, we propose the usage of functional regularization.
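As a generic sketch of functional regularization (omitting the projection head implied by the paper's "projected" variant), the snippet below penalizes drift between the current network's outputs and a frozen snapshot of its past self on incoming data; the loss weight is an illustrative choice.

```python
# Functional regularization sketch: constrain the function, not the weights.
import copy
import torch
import torch.nn.functional as F

def functional_reg_loss(model, frozen_past, x, ssl_loss, beta=1.0):
    """Usual self-supervised loss plus a penalty on output drift from a
    frozen snapshot of the model."""
    with torch.no_grad():
        past_out = frozen_past(x)              # targets from the old function
    drift = F.mse_loss(model(x), past_out)
    return ssl_loss + beta * drift

# toy usage: snapshot the network before adapting to a new data stream
model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 8))
frozen_past = copy.deepcopy(model).eval()
x = torch.randn(32, 16)
loss = functional_reg_loss(model, frozen_past, x, ssl_loss=torch.tensor(0.7))
loss.backward()
```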
- Online Constrained Model-based Reinforcement Learning [13.362455603441552] (2020-04-07)
A key requirement is the ability to handle continuous state and action spaces while remaining within a limited time and resource budget.
We propose a model-based approach that combines Gaussian Process regression and Receding Horizon Control.
We test our approach on a cart pole swing-up environment and demonstrate the benefits of online learning on an autonomous racing task.
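The two ingredients named here can be sketched compactly: a Gaussian Process fit online as the one-step dynamics model, queried by a receding-horizon controller that scores random action sequences. The 1-D dynamics, cost, and horizon below are toy assumptions, not the paper's cart-pole or racing setups.

```python
# GP dynamics model + receding-horizon (random shooting) control, toy version.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

true_step = lambda s, a: 0.9 * s + 0.5 * a     # "unknown" 1-D plant
cost = lambda s: (s - 1.0) ** 2                # drive the state toward 1.0

rng = np.random.default_rng(0)
X, Y, s = [], [], 0.0
gp = GaussianProcessRegressor()
for t in range(30):
    if len(X) >= 3:
        # receding horizon: score random action sequences under the GP model
        candidates = rng.uniform(-1, 1, (64, 5))
        def rollout(seq):
            sim, total = s, 0.0
            for a in seq:
                sim = gp.predict(np.array([[sim, a]]))[0]
                total += cost(sim)
            return total
        a = candidates[np.argmin([rollout(seq) for seq in candidates])][0]
    else:
        a = rng.uniform(-1, 1)                 # explore until the model has data
    s_next = true_step(s, a)
    X.append([s, a]); Y.append(s_next)
    gp.fit(np.array(X), np.array(Y))           # online model update
    s = s_next
print(f"final state: {s:.2f}")
```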