A Maintenance Planning Framework using Online and Offline Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2208.00808v2
- Date: Tue, 18 Apr 2023 08:17:35 GMT
- Title: A Maintenance Planning Framework using Online and Offline Deep Reinforcement Learning
- Authors: Zaharah A. Bukhsh, Nils Jansen, Hajo Molegraaf
- Abstract summary: This paper develops a deep reinforcement learning (DRL) solution to automatically determine an optimal rehabilitation policy for deteriorating water pipes.
We train the agent using deep Q-learning (DQN) to learn an optimal policy with minimal average costs and reduced failure probability.
We demonstrate that DRL-based policies improve over standard preventive, corrective, and greedy planning alternatives.
- Score: 4.033107207078282
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cost-effective asset management is an area of interest across several
industries. Specifically, this paper develops a deep reinforcement learning
(DRL) solution to automatically determine an optimal rehabilitation policy for
continuously deteriorating water pipes. We approach the problem of
rehabilitation planning in an online and offline DRL setting. In online DRL,
the agent interacts with a simulated environment of multiple pipes with
distinct lengths, materials, and failure rate characteristics. We train the
agent using deep Q-learning (DQN) to learn an optimal policy with minimal
average costs and reduced failure probability. In offline learning, the agent
uses static data, e.g., DQN replay data, to learn an optimal policy via a
conservative Q-learning algorithm without further interactions with the
environment. We demonstrate that DRL-based policies improve over standard
preventive, corrective, and greedy planning alternatives. Additionally,
learning from the fixed DQN replay dataset in an offline setting further
improves the performance. The results indicate that the existing deterioration
profiles of water pipes, consisting of large and diverse state and action
trajectories, provide a valuable avenue for learning rehabilitation policies in
the offline setting, which can then be further fine-tuned using the simulator.
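To make the two-stage recipe concrete, here is a minimal sketch pairing a DQN-style temporal-difference loss with the conservative Q-learning (CQL) penalty used in the offline stage. The state dimension, network sizes, and penalty weight `alpha` are illustrative assumptions rather than the paper's settings; only the loss structure follows the standard DQN and CQL formulations.

```python
# Sketch: DQN TD loss for the online stage, plus CQL's conservative penalty
# for offline learning from fixed replay data. Sizes and alpha are assumptions.
import torch
import torch.nn as nn

n_obs, n_actions, alpha, gamma = 8, 3, 1.0, 0.99   # illustrative values
q_net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())

def dqn_cql_loss(s, a, r, s_next, done):
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)        # Q(s, a) on data
    with torch.no_grad():                                       # standard DQN target
        target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    td_loss = nn.functional.mse_loss(q_sa, target)
    # CQL penalty: push Q down on all actions, up on actions seen in the dataset.
    cql_penalty = (torch.logsumexp(q_net(s), dim=1) - q_sa).mean()
    return td_loss + alpha * cql_penalty                        # alpha=0 -> plain DQN

# Dummy batch standing in for transitions from the DQN replay buffer.
batch = (torch.randn(32, n_obs), torch.randint(n_actions, (32,)),
         torch.randn(32), torch.randn(32, n_obs), torch.zeros(32))
dqn_cql_loss(*batch).backward()
```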
Related papers
- FORLER: Federated Offline Reinforcement Learning with Q-Ensemble and Actor Rectification [5.423004756752519]
In Internet-of-Things systems, federated learning has advanced online reinforcement learning (RL) by enabling parallel policy training without sharing raw data.
We present FORLER, combining Q-ensemble aggregation on the server with actor rectification on devices.
The server robustly merges device Q-functions to curb policy pollution and shift heavy computation off resource-constrained hardware without compromising privacy.
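The summary only names the mechanism, so the following is a hypothetical sketch of one way a server might merge per-device Q-functions pessimistically; the min-over-devices rule and all names are assumptions, not FORLER's published algorithm.

```python
# Hypothetical server-side Q-ensemble merge: a pessimistic (min) aggregation
# means one polluted device cannot inflate the merged Q-values. The rule is
# an assumption; FORLER's actual aggregation may differ.
import numpy as np

def merge_device_q(per_device_q):
    """per_device_q: list of (batch, n_actions) Q-value arrays, one per device."""
    return np.stack(per_device_q).min(axis=0)   # (batch, n_actions)

q_from_devices = [np.random.randn(4, 3) for _ in range(5)]   # 5 toy devices
greedy_actions = merge_device_q(q_from_devices).argmax(axis=1)
```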
arXiv Detail & Related papers (2026-02-02T12:57:09Z)
- Offline Retraining for Online RL: Decoupled Policy Learning to Mitigate Exploration Bias [96.14064037614942]
Offline retraining, a policy extraction step at the end of online fine-tuning, is proposed.
An optimistic (exploration) policy is used to interact with the environment, and a separate pessimistic (exploitation) policy is trained on all the observed data for evaluation.
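A minimal sketch of the decoupling idea, assuming the optimism/pessimism signal comes from a Q-ensemble's mean and standard deviation; the paper's exact policy parameterization is not described in the summary above.

```python
# Sketch: one optimistic policy for collecting data, one pessimistic policy
# for evaluation, both reading the same Q-ensemble. The mean +/- std rule
# is an illustrative assumption.
import numpy as np

def select_action(state_q, optimistic, beta=1.0):
    """state_q: (n_members, n_actions) ensemble Q-values for one state."""
    mean, std = state_q.mean(axis=0), state_q.std(axis=0)
    score = mean + beta * std if optimistic else mean - beta * std
    return int(score.argmax())

state_q = np.random.randn(5, 4)                       # 5 members, 4 actions
a_explore = select_action(state_q, optimistic=True)   # interacts with the env
a_eval = select_action(state_q, optimistic=False)     # reported at evaluation
```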
arXiv Detail & Related papers (2023-10-12T17:50:09Z)
- Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement Learning [9.341618348621662]
We aim to find the best-performing policy within a limited budget of online interactions.
We first study the major online RL exploration methods based on intrinsic rewards and UCB.
We then introduce an algorithm for planning to go out-of-distribution that avoids these issues.
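For reference, intrinsic-reward exploration of the kind surveyed here can be as simple as a visit-count bonus added to the environment reward; the sketch below is a generic illustration of that family, not the algorithm the paper introduces.

```python
# Generic intrinsic-reward sketch: a count-based novelty bonus that decays as
# states are revisited. Purely illustrative of the surveyed exploration family.
from collections import defaultdict
import math

visit_counts = defaultdict(int)

def shaped_reward(state_key, extrinsic_reward, beta=0.1):
    visit_counts[state_key] += 1
    return extrinsic_reward + beta / math.sqrt(visit_counts[state_key])

r = shaped_reward(state_key=("region_3", "age_bucket_5"), extrinsic_reward=-1.0)
```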
arXiv Detail & Related papers (2023-10-09T13:47:05Z)
- ENOTO: Improving Offline-to-Online Reinforcement Learning with Q-Ensembles [52.34951901588738]
We propose a novel framework called ENsemble-based Offline-To-Online (ENOTO) RL.
By increasing the number of Q-networks, we seamlessly bridge offline pre-training and online fine-tuning without degrading performance.
Experimental results demonstrate that ENOTO can substantially improve the training stability, learning efficiency, and final performance of existing offline RL methods.
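One standard way to realize a Q-ensemble bridge, sketched below, is to bootstrap from the minimum over N target networks; how ENOTO schedules pessimism from offline pre-training to online fine-tuning is not spelled out in the summary, so treat the details as assumptions.

```python
# Sketch of an ensemble Q-target: min over N target networks gives pessimism,
# and N controls its strength. Sizes and N are illustrative assumptions.
import torch
import torch.nn as nn

n_obs, n_actions, N = 8, 3, 10
targets = [nn.Sequential(nn.Linear(n_obs, 32), nn.ReLU(), nn.Linear(32, n_actions))
           for _ in range(N)]

def ensemble_next_value(s_next):
    qs = torch.stack([t(s_next) for t in targets])    # (N, batch, n_actions)
    return qs.min(dim=0).values.max(dim=1).values     # pessimistic V(s')

v_next = ensemble_next_value(torch.randn(16, n_obs))  # (batch,)
```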
arXiv Detail & Related papers (2023-06-12T05:10:10Z)
- Finetuning from Offline Reinforcement Learning: Challenges, Trade-offs and Practical Solutions [30.050083797177706]
Offline reinforcement learning (RL) allows for the training of competent agents from offline datasets without any interaction with the environment.
Online finetuning of such offline models can further improve performance.
We show that it is possible to use standard online off-policy algorithms for faster improvement.
arXiv Detail & Related papers (2023-03-30T14:08:31Z)
- Adaptive Behavior Cloning Regularization for Stable Offline-to-Online Reinforcement Learning [80.25648265273155]
Offline reinforcement learning, by learning from a fixed dataset, makes it possible to learn agent behaviors without interacting with the environment.
During online fine-tuning, the performance of the pre-trained agent may collapse quickly due to the sudden distribution shift from offline to online data.
We propose to adaptively weigh the behavior cloning loss during online fine-tuning based on the agent's performance and training stability.
Experiments show that the proposed method yields state-of-the-art offline-to-online reinforcement learning performance on the popular D4RL benchmark.
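A minimal sketch of the idea, assuming a TD3+BC-style actor loss; the multiplicative up/down adaptation rule below is an illustrative stand-in for the paper's performance- and stability-based weighting.

```python
# Sketch: actor loss with an adaptively weighted behavior-cloning (BC) term.
# The adaptation rule here is an assumption standing in for the paper's.
import torch

def actor_loss(q_pi, pi_action, data_action, bc_weight):
    bc = ((pi_action - data_action) ** 2).mean()   # stay near dataset actions
    return -q_pi.mean() + bc_weight * bc

def adapt_weight(bc_weight, recent_return, best_return, up=1.05, down=0.95):
    # Performance drop -> lean harder on BC; stable progress -> relax it.
    return bc_weight * (up if recent_return < best_return else down)

w = adapt_weight(bc_weight=1.0, recent_return=80.0, best_return=100.0)
loss = actor_loss(torch.randn(32), torch.randn(32, 4), torch.randn(32, 4), w)
```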
arXiv Detail & Related papers (2022-10-25T09:08:26Z)
- Robust Offline Reinforcement Learning with Gradient Penalty and Constraint Relaxation [38.95482624075353]
We introduce a gradient penalty over the learned value function to tackle exploding Q-functions.
We then relax the closeness constraints towards non-optimal actions via critic-weighted constraint relaxation.
Experimental results show that the proposed techniques effectively tame the non-optimal trajectories for policy constraint offline RL methods.
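A gradient penalty on a critic can be sketched as follows: penalizing the norm of dQ/da discourages Q-values from blowing up along out-of-distribution action directions. The squared-norm form and weight are assumptions; the paper's exact penalty may differ.

```python
# Sketch: penalize large action-gradients of the critic so Q cannot explode
# along out-of-distribution actions. Form and weight are assumptions.
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(8 + 4, 64), nn.ReLU(), nn.Linear(64, 1))

def gradient_penalty(s, a, weight=10.0):
    a = a.detach().requires_grad_(True)
    q = critic(torch.cat([s, a], dim=1)).sum()
    (grad,) = torch.autograd.grad(q, a, create_graph=True)
    return weight * (grad.norm(dim=1) ** 2).mean()

gp = gradient_penalty(torch.randn(32, 8), torch.randn(32, 4))
gp.backward()   # in practice, added to the usual TD loss before the update
```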
arXiv Detail & Related papers (2022-10-19T11:22:36Z)
- Boosting Offline Reinforcement Learning via Data Rebalancing [104.3767045977716]
Offline reinforcement learning (RL) is challenged by the distributional shift between learning policies and datasets.
We propose a simple yet effective method to boost offline RL algorithms based on the observation that resampling a dataset keeps the distribution support unchanged.
We dub our method ReD (Return-based Data Rebalance), which can be implemented with less than 10 lines of code change and adds negligible running time.
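The resampling idea does fit in a few lines, consistent with the "less than 10 lines of code change" claim. The sketch below samples transitions with probability increasing in their episode's return; the softmax weighting and temperature are assumptions about the exact scheme.

```python
# Sketch of return-based rebalancing: resample transitions with probability
# tied to their episode's return, leaving the support of the data unchanged.
# The softmax form and temperature are assumptions.
import numpy as np

def rebalance_probs(episode_returns, temperature=50.0):
    """episode_returns: one value per transition (its episode's return)."""
    z = (episode_returns - episode_returns.max()) / temperature
    p = np.exp(z)
    return p / p.sum()

returns = np.array([10.0, 10.0, 250.0, 250.0, 40.0])   # toy per-transition returns
idx = np.random.choice(len(returns), size=4, p=rebalance_probs(returns))
```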
arXiv Detail & Related papers (2022-10-17T16:34:01Z)
- Constraints Penalized Q-Learning for Safe Offline Reinforcement Learning [15.841609263723575]
We study the problem of safe offline reinforcement learning (RL).
The goal is to learn a policy that maximizes long-term reward while satisfying safety constraints given only offline data, without further interaction with the environment.
We show that na"ive approaches that combine techniques from safe RL and offline RL can only learn sub-optimal solutions.
arXiv Detail & Related papers (2021-07-19T16:30:14Z)
- Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble [135.6115462399788]
Deep offline reinforcement learning has made it possible to train strong robotic agents from offline datasets.
State-action distribution shift may lead to severe bootstrap error during fine-tuning.
We propose a balanced replay scheme that prioritizes samples encountered online while also encouraging the use of near-on-policy samples.
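At its simplest, a balanced replay scheme can be sketched as drawing a fixed fraction of every batch from the online buffer. The 50/50 split below is an assumption; the paper prioritizes samples by how near-on-policy they are rather than using a fixed ratio.

```python
# Sketch: mix online and offline transitions in every batch so fresh online
# data is not drowned out. The fixed 50/50 split is an assumption; the paper
# uses a learned priority for near-on-policy samples instead.
import random

def sample_balanced(online_buffer, offline_buffer, batch_size, online_frac=0.5):
    n_online = min(int(batch_size * online_frac), len(online_buffer))
    batch = random.sample(online_buffer, n_online)
    batch += random.sample(offline_buffer, batch_size - n_online)
    return batch

online = [("s", "a", 1.0)] * 100        # toy transitions gathered online
offline = [("s", "a", 0.0)] * 10_000    # toy offline dataset
batch = sample_balanced(online, offline, batch_size=8)
```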
arXiv Detail & Related papers (2021-07-01T16:26:54Z)
- Critic Regularized Regression [70.8487887738354]
We propose a novel offline RL algorithm to learn policies from data using a form of critic-regularized regression (CRR).
We find that CRR performs surprisingly well and scales to tasks with high-dimensional state and action spaces.
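CRR's core update is advantage-weighted behavior cloning. The sketch below uses the exponential weighting variant exp(A / beta); the beta and clipping values are illustrative, and the log-probabilities and advantages would come from the policy and critic in a full implementation.

```python
# Sketch of CRR's actor update: behavior cloning on dataset actions, weighted
# by exp(advantage / beta) so the critic amplifies good actions. The clip and
# beta values are illustrative.
import torch

def crr_actor_loss(log_prob_data_action, advantage, beta=1.0, clip=20.0):
    weight = torch.exp(advantage / beta).clamp(max=clip)
    return -(weight.detach() * log_prob_data_action).mean()

loss = crr_actor_loss(log_prob_data_action=torch.randn(32),
                      advantage=torch.randn(32))
```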
arXiv Detail & Related papers (2020-06-26T17:50:26Z)