Reinforcement Learning based Condition-oriented Maintenance Scheduling
for Flow Line Systems
- URL: http://arxiv.org/abs/2108.12298v1
- Date: Fri, 27 Aug 2021 14:21:07 GMT
- Title: Reinforcement Learning based Condition-oriented Maintenance Scheduling
for Flow Line Systems
- Authors: Raphael Lamprecht, Ferdinand Wurst, Marco F. Huber
- Abstract summary: This paper introduces a deep reinforcement learning approach for condition-oriented maintenance scheduling in flow line systems.
Different policies are learned, analyzed, and evaluated against a benchmark scheduling heuristic based on reward modelling.
The evaluation of the learned policies shows that reinforcement learning based maintenance strategies meet the requirements of the presented use case and are suitable for maintenance scheduling on the shop floor.
- Score: 31.64715462538063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Maintenance scheduling is a complex decision-making problem in the production
domain, where a number of maintenance tasks and resources have to be assigned
and scheduled to production entities in order to prevent unplanned production
downtime. Intelligent maintenance strategies are required that are able to
adapt to the dynamics and different conditions of production systems. The paper
introduces a deep reinforcement learning approach for condition-oriented
maintenance scheduling in flow line systems. Different policies are learned,
analyzed and evaluated against a benchmark scheduling heuristic based on reward
modelling. The evaluation of the learned policies shows that reinforcement
learning based maintenance strategies meet the requirements of the presented
use case and are suitable for maintenance scheduling on the shop floor.
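
As a concrete illustration of the setting described above, the sketch below frames condition-oriented maintenance scheduling in a flow line as a reinforcement learning environment and evaluates a simple condition-threshold policy of the kind such a benchmark heuristic could use. It is not the authors' environment, reward model, or heuristic: the line length, degradation dynamics, cost terms, threshold, and all names (FlowLineEnv, threshold_heuristic, evaluate) are illustrative assumptions chosen only to make the problem structure tangible.

```python
# Minimal, illustrative sketch of condition-oriented maintenance scheduling in a
# flow line, NOT the paper's implementation: machine count, degradation dynamics,
# cost figures, and the threshold heuristic below are assumptions for clarity.
import random

N_MACHINES = 4           # assumed flow line length
FAILURE_LEVEL = 1.0      # a machine fails once its degradation reaches this level
MAINT_COST = 2.0         # assumed cost of one preventive maintenance task
DOWNTIME_COST = 10.0     # assumed cost per step of unplanned downtime
THROUGHPUT_REWARD = 1.0  # reward per step while the whole line is running


class FlowLineEnv:
    """Toy flow line: each machine degrades stochastically; the agent may
    maintain one machine per step (action = machine index) or do nothing
    (action = N_MACHINES). The line only produces if no machine has failed."""

    def reset(self):
        self.condition = [0.0] * N_MACHINES  # 0.0 = as good as new
        return tuple(self.condition)

    def step(self, action):
        reward = 0.0
        if action < N_MACHINES:               # preventive maintenance restores the machine
            self.condition[action] = 0.0
            reward -= MAINT_COST
        for i in range(N_MACHINES):            # stochastic degradation of every machine
            self.condition[i] += random.uniform(0.0, 0.1)
        if any(c >= FAILURE_LEVEL for c in self.condition):
            reward -= DOWNTIME_COST             # unplanned downtime blocks the line
            self.condition = [0.0 if c >= FAILURE_LEVEL else c
                              for c in self.condition]  # failed machines repaired correctively
        else:
            reward += THROUGHPUT_REWARD         # line produces only when all machines are up
        return tuple(self.condition), reward


def threshold_heuristic(state, threshold=0.7):
    """Condition-based benchmark policy: maintain the most degraded machine
    once it crosses an assumed threshold, otherwise do nothing."""
    worst = max(range(N_MACHINES), key=lambda i: state[i])
    return worst if state[worst] >= threshold else N_MACHINES


def evaluate(policy, episodes=100, horizon=200):
    """Average episode return, usable for comparing any policy against another."""
    env, total = FlowLineEnv(), 0.0
    for _ in range(episodes):
        state = env.reset()
        for _ in range(horizon):
            state, reward = env.step(policy(state))
            total += reward
    return total / episodes


if __name__ == "__main__":
    print("heuristic return:", evaluate(threshold_heuristic))
```

A deep RL agent (e.g., a DQN or PPO policy mapping the condition vector to a maintenance action) could be trained on such an environment and compared against threshold_heuristic with the same evaluate routine, mirroring the paper's comparison of learned policies with a benchmark heuristic.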
Related papers
- LLM-R: A Framework for Domain-Adaptive Maintenance Scheme Generation Combining Hierarchical Agents and RAG [7.864939415613373]
This paper proposes a Maintenance Scheme Generation Method based on Large Language Models (LLM-R), whose key innovations include hierarchical agents and retrieval-augmented generation (RAG).
The experimental results show that the accuracy of the maintenance schemes generated by the proposed method reached 91.59%.
arXiv Detail & Related papers (2024-11-07T07:07:34Z) - Learning-enabled Flexible Job-shop Scheduling for Scalable Smart
Manufacturing [11.509669981978874]
In smart manufacturing systems, flexible job-shop scheduling with transportation constraints (FJSPT) is essential for maximizing productivity.
Recent deep reinforcement learning (DRL)-based methods for FJSPT have encountered a scale generalization challenge.
We introduce a novel graph-based DRL method named the Heterogeneous Graph Scheduler (HGS).
arXiv Detail & Related papers (2024-02-14T06:49:23Z) - TranDRL: A Transformer-Driven Deep Reinforcement Learning Enabled Prescriptive Maintenance Framework [58.474610046294856]
Industrial systems demand reliable predictive maintenance strategies to enhance operational efficiency and reduce downtime.
This paper introduces an integrated framework that leverages Transformer-based neural networks and deep reinforcement learning (DRL) algorithms to optimize system maintenance actions.
arXiv Detail & Related papers (2023-09-29T02:27:54Z) - Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method, Bayesian Adaptive Moment Regularization (BAdam), that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance among prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z) - Reinforcement and Deep Reinforcement Learning-based Solutions for
Machine Maintenance Planning, Scheduling Policies, and Optimization [1.6447597767676658]
This paper presents a literature review on the applications of reinforcement and deep reinforcement learning for maintenance planning and optimization problems.
By leveraging condition-monitoring data from systems and machines with reinforcement learning, smart maintenance planners can be developed, a precursor to achieving a smart factory.
arXiv Detail & Related papers (2023-07-07T22:47:29Z) - Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z) - Continual Learning for Predictive Maintenance: Overview and Challenges [6.620789302906817]
We present a brief introduction to predictive maintenance, non-stationary environments, and continual learning.
We then discuss the current challenges of both predictive maintenance and continual learning, proposing future directions at the intersection of both areas.
arXiv Detail & Related papers (2023-01-29T15:32:53Z) - Learning Goal-Conditioned Policies Offline with Self-Supervised Reward
Shaping [94.89128390954572]
We propose a novel self-supervised learning phase on the pre-collected dataset to understand the structure and the dynamics of the model.
We evaluate our method on three continuous control tasks, and show that our model significantly outperforms existing approaches.
arXiv Detail & Related papers (2023-01-05T15:07:10Z) - Evaluating Disentanglement in Generative Models Without Knowledge of
Latent Factors [71.79984112148865]
We introduce a method for ranking generative models based on the training dynamics exhibited during learning.
Inspired by recent theoretical characterizations of disentanglement, our method does not require supervision of the underlying latent factors.
arXiv Detail & Related papers (2022-10-04T17:27:29Z) - Online Constrained Model-based Reinforcement Learning [13.362455603441552]
A key requirement is the ability to handle continuous state and action spaces while remaining within a limited time and resource budget.
We propose a model-based approach that combines Gaussian Process regression and receding horizon control (an illustrative sketch follows this list).
We test our approach on a cart-pole swing-up environment and demonstrate the benefits of online learning on an autonomous racing task.
arXiv Detail & Related papers (2020-04-07T15:51:34Z)