Reinforcement Learning for Scalable Train Timetable Rescheduling with
Graph Representation
- URL: http://arxiv.org/abs/2401.06952v1
- Date: Sat, 13 Jan 2024 02:14:35 GMT
- Title: Reinforcement Learning for Scalable Train Timetable Rescheduling with
Graph Representation
- Authors: Peng Yue, Yaochu Jin, Xuewu Dai, Zhenhua Feng, Dongliang Cui
- Abstract summary: Train timetable rescheduling (TTR) aims to promptly restore the original operation of trains after disturbances or disruptions.
This study proposes a reinforcement learning-based approach to TTR, which makes the following contributions compared to existing work.
- Score: 28.5828807787632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Train timetable rescheduling (TTR) aims to promptly restore the original
operation of trains after unexpected disturbances or disruptions. Currently,
this task is still performed manually by train dispatchers, who find it
difficult to maintain consistent performance across diverse problem instances. To
mitigate this issue,
this study proposes a reinforcement learning-based approach to TTR, which makes
the following contributions compared to existing work. First, we design a
simple directed graph to represent the TTR problem, enabling the automatic
extraction of informative states through graph neural networks. Second, we
reformulate the construction process of TTR's solution, not only decoupling the
decision model from the problem size but also ensuring the generated scheme's
feasibility. Third, we design a learning curriculum for our model to handle the
scenarios with different levels of delay. Finally, a simple local search method
is proposed to assist the learned decision model, which can significantly
improve solution quality with little additional computation cost, further
enhancing the practical value of our method. Extensive experimental results
demonstrate the effectiveness of our method. The learned decision model can
achieve better performance for various problems with varying degrees of train
delay and different scales when compared to handcrafted rules and
state-of-the-art solvers.
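As a concrete illustration of the first two contributions, the hedged Python sketch below represents train events as nodes of a directed graph (with assumed running/dwell and headway edges), runs a toy message-passing step in place of the paper's graph neural network, and scores candidates with a single per-node function so that the decision step is independent of instance size. Every name, feature, and weight here is an illustrative assumption, not the authors' implementation.
```python
# A minimal sketch of the abstract's first two ideas, under assumptions:
# train events as nodes of a directed graph, a toy message-passing step
# standing in for the paper's GNN, and a per-node scoring rule so the
# decision step does not grow with instance size.
from dataclasses import dataclass, field


@dataclass
class EventNode:
    """One train event (e.g., a departure) at a station."""
    train: int
    station: int
    scheduled: float                          # planned time (minutes)
    delay: float = 0.0                        # current delay (minutes)
    succ: list = field(default_factory=list)  # outgoing constraint edges


def build_event_graph(timetable):
    """Build a directed event graph from (train, station, time) triples.

    Assumed edge types: consecutive events of the same train
    (running/dwell constraints) and consecutive trains at the same
    station (headway constraints).
    """
    nodes = [EventNode(t, s, time) for t, s, time in timetable]
    by_train, by_station = {}, {}
    for n in nodes:
        by_train.setdefault(n.train, []).append(n)
        by_station.setdefault(n.station, []).append(n)
    for group in list(by_train.values()) + list(by_station.values()):
        group.sort(key=lambda n: n.scheduled)
        for a, b in zip(group, group[1:]):
            a.succ.append(b)
    return nodes


def message_pass(nodes, rounds=2):
    """Toy GNN stand-in: each node's second feature aggregates the delay
    plus 'pressure' of its successors, so delay information propagates
    one edge further per round."""
    emb = {id(n): (n.delay, 0.0) for n in nodes}
    for _ in range(rounds):
        new = {}
        for n in nodes:
            down = [emb[id(s)][0] + emb[id(s)][1] for s in n.succ]
            new[id(n)] = (n.delay, sum(down) / len(down) if down else 0.0)
        emb = new
    return emb


def pick_next_event(nodes, emb, w=(1.0, 0.5)):
    """Size-independent decision: the same small scoring function is
    applied to every candidate, and the best one is scheduled next."""
    return max(nodes, key=lambda n: w[0] * emb[id(n)][0] + w[1] * emb[id(n)][1])


# Toy usage: two trains over two stations; train 1 is 4 minutes late.
tt = [(0, 0, 0.0), (0, 1, 10.0), (1, 0, 5.0), (1, 1, 15.0)]
nodes = build_event_graph(tt)
nodes[2].delay = 4.0
emb = message_pass(nodes)
chosen = pick_next_event(nodes, emb)
print(chosen.train, chosen.station)  # -> 1 0
```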
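The fourth contribution pairs the learned policy with a cheap local search. The sketch below shows one generic variant under stated assumptions: first-improvement adjacent swaps of the train service order at a single station, evaluated by a simplified fixed-headway delay model. The abstract does not specify the paper's actual neighborhood or evaluator, so neither is claimed here.
```python
# Hedged sketch of a lightweight local search: adjacent swaps of the
# service order at one station, kept only if a simplified fixed-headway
# delay model says the swap helps.

HEADWAY = 3.0  # assumed minimum separation between trains (minutes)


def total_delay(arrivals, order):
    """Total waiting beyond each train's earliest feasible time when the
    station serves trains in the given order with a fixed headway."""
    delay, clock = 0.0, float("-inf")
    for train in order:
        t = max(arrivals[train], clock + HEADWAY)
        delay += t - arrivals[train]
        clock = t
    return delay


def local_search(arrivals, order):
    """Repeatedly try adjacent swaps; keep a swap only if it lowers the
    total delay, and stop when no swap helps."""
    order = list(order)
    best = total_delay(arrivals, order)
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 1):
            order[i], order[i + 1] = order[i + 1], order[i]
            cand = total_delay(arrivals, order)
            if cand < best - 1e-9:
                best, improved = cand, True
            else:  # revert a non-improving swap
                order[i], order[i + 1] = order[i + 1], order[i]
    return order, best


# Toy usage: serving the most-delayed train first is not optimal here.
arrivals = {"A": 0.2, "B": 0.1, "C": 0.0}
print(local_search(arrivals, ["A", "B", "C"]))  # reorders and lowers delay
```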
Related papers
- Contrastive-Adversarial and Diffusion: Exploring pre-training and fine-tuning strategies for sulcal identification [3.0398616939692777]
Techniques like adversarial learning, contrastive learning, diffusion denoising learning, and ordinary reconstruction learning have become standard.
The study aims to elucidate the advantages of pre-training techniques and fine-tuning strategies to enhance the learning process of neural networks.
arXiv Detail & Related papers (2024-05-29T15:44:51Z)
- Liquid Neural Network-based Adaptive Learning vs. Incremental Learning for Link Load Prediction amid Concept Drift due to Network Failures [37.66676003679306]
Adapting to concept drift is a challenging task in machine learning.
In communication networks, such an issue emerges when forecasting traffic after a failure event.
We propose an approach that exploits adaptive learning algorithms, namely, liquid neural networks, which are capable of self-adaptation to abrupt changes in data patterns without requiring any retraining.
arXiv Detail & Related papers (2024-04-08T08:47:46Z)
- Deep Reinforcement Learning for Picker Routing Problem in Warehousing [0.6562256987706128]
We introduce an attention-based neural network for modeling picker tours, which is trained using reinforcement learning.
A key advantage of our proposed method is its ability to offer an option to reduce the perceived complexity of routes.
arXiv Detail & Related papers (2024-02-05T21:25:45Z)
- Task Arithmetic with LoRA for Continual Learning [0.0]
We propose a novel method to continually train vision models using low-rank adaptation and task arithmetic.
When aided by a small memory of 10 samples per class, our method achieves performance close to full-set fine-tuning.
arXiv Detail & Related papers (2023-11-04T15:12:24Z)
- Supervised Pretraining Can Learn In-Context Reinforcement Learning [96.62869749926415]
In this paper, we study the in-context learning capabilities of transformers in decision-making problems.
We introduce and study Decision-Pretrained Transformer (DPT), a supervised pretraining method where the transformer predicts an optimal action.
We find that the pretrained transformer can be used to solve a range of RL problems in-context, exhibiting both exploration online and conservatism offline.
arXiv Detail & Related papers (2023-06-26T17:58:50Z)
- Simplified Temporal Consistency Reinforcement Learning [19.814047499837084]
We show that a simple representation learning approach relying on a latent dynamics model trained by latent temporal consistency is sufficient for high-performance RL.
Our approach outperforms model-free methods by a large margin and matches model-based methods' sample efficiency while training 2.4 times faster.
arXiv Detail & Related papers (2023-06-15T19:37:43Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence more stably and accurately.
Our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- Learning to Reweight Imaginary Transitions for Model-Based Reinforcement Learning [58.66067369294337]
When the model is inaccurate or biased, imaginary trajectories may be deleterious for training the action-value and policy functions.
We adaptively reweight the imaginary transitions, so as to reduce the negative effects of poorly generated trajectories.
Our method outperforms state-of-the-art model-based and model-free RL algorithms on multiple tasks.
arXiv Detail & Related papers (2021-04-09T03:13:35Z)
- Sufficiently Accurate Model Learning for Planning [119.80502738709937]
This paper introduces the constrained Sufficiently Accurate model learning approach.
It provides examples of such problems, and presents a theorem on how close some approximate solutions can be.
The approximate solution quality will depend on the function parameterization, loss and constraint function smoothness, and the number of samples in model learning.
arXiv Detail & Related papers (2021-02-11T16:27:31Z)
- Unsupervised Transfer Learning for Spatiotemporal Predictive Networks [90.67309545798224]
We study how to transfer knowledge from a zoo of unsupervisedly learned models towards another network.
Our motivation is that models are expected to understand complex dynamics from different sources.
Our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target network even from less relevant source models.
arXiv Detail & Related papers (2020-09-24T15:40:55Z)
- Deep Unfolding Network for Image Super-Resolution [159.50726840791697]
This paper proposes an end-to-end trainable unfolding network which leverages both learning-based methods and model-based methods.
The proposed network inherits the flexibility of model-based methods to super-resolve blurry, noisy images for different scale factors via a single model.
arXiv Detail & Related papers (2020-03-23T17:55:42Z)