Learning Vehicle Routing Problems using Policy Optimisation
- URL: http://arxiv.org/abs/2012.13269v1
- Date: Thu, 24 Dec 2020 14:18:56 GMT
- Title: Learning Vehicle Routing Problems using Policy Optimisation
- Authors: Nasrin Sultana, Jeffrey Chan, A. K. Qin, Tabinda Sarwar
- Abstract summary: State-of-the-art approaches learn a policy using reinforcement learning, and the learnt policy acts as a pseudo solver.
These approaches have demonstrated good performance in some cases, but given the large search space typical of routing problems, they can converge too quickly to a poor policy.
We propose entropy regularised reinforcement learning (ERRL) that supports exploration by encouraging more stochastic policies.
- Score: 4.093722933440819
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep reinforcement learning (DRL) has been used to learn effective
heuristics for solving complex combinatorial optimisation problems via policy
networks and has demonstrated promising performance. Existing works have
focused on solving (vehicle) routing problems as they strike a nice balance
between non-triviality and difficulty. State-of-the-art approaches learn a
policy using reinforcement learning, and the learnt policy acts as a pseudo
solver. These approaches have demonstrated good performance in some cases, but
given the large search space typical of combinatorial/routing problems, they
can converge too quickly to a poor policy. To prevent this, in this paper we
propose an approach named entropy regularised reinforcement learning (ERRL)
that supports exploration by encouraging more stochastic policies, which tends
to improve optimisation. Empirically, the low-variance ERRL makes RL training
fast and stable. We also introduce a combination of local search operators at
test time, which significantly improves solutions and complements ERRL. We
qualitatively demonstrate that for vehicle routing problems, a policy with
higher entropy can smooth the optimisation landscape, making it easier to
optimise. The quantitative evaluation shows that the performance of the model
is comparable with state-of-the-art variants. In our evaluation, we
experimentally illustrate that the model produces state-of-the-art performance
on variants of the Vehicle Routing Problem, such as the Capacitated Vehicle
Routing Problem (CVRP), Multiple Routing with Fixed Fleet Problems (MRPFF) and
the Travelling Salesman Problem (TSP).
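As an illustration of the core idea, the sketch below shows an entropy-regularised policy-gradient loss of the kind ERRL builds on: an entropy bonus keeps the learned routing policy stochastic, discouraging premature convergence to a poor near-deterministic policy. This is a minimal sketch assuming a PyTorch-style policy network; the function name, tensor shapes, the entropy weight `beta`, and the greedy-rollout baseline are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def entropy_regularised_loss(log_probs, entropies, tour_lengths,
                             baseline_lengths, beta=0.01):
    """Entropy-regularised REINFORCE loss (illustrative sketch).

    log_probs:        (batch,) summed log-probabilities of the sampled tours
    entropies:        (batch,) summed per-step policy entropies
    tour_lengths:     (batch,) lengths of the sampled tours (cost to minimise)
    baseline_lengths: (batch,) e.g. lengths from a greedy-rollout baseline
    beta:             weight of the entropy bonus (assumed hyperparameter)
    """
    # advantage < 0 when the sampled tour beats the baseline
    advantage = (tour_lengths - baseline_lengths).detach()
    # REINFORCE term: minimising it lowers the log-prob of worse-than-baseline
    # tours and raises the log-prob of better ones
    reinforce = (advantage * log_probs).mean()
    # entropy bonus: subtracting it rewards more stochastic policies
    return reinforce - beta * entropies.mean()
```

At test time the paper additionally applies local search operators to the sampled tours, which complements the learned policy rather than replacing it.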
Related papers
- Learn to Solve Vehicle Routing Problems ASAP: A Neural Optimization Approach for Time-Constrained Vehicle Routing Problems with Finite Vehicle Fleet [0.0]
We propose a neural combinatorial optimization (NCO) approach to solve a time-constrained capacitated VRP with a finite vehicle fleet size.
The method is able to find adequate and cost-efficient solutions, showing both flexibility and robust generalization.
arXiv Detail & Related papers (2024-11-07T15:16:36Z) - Rethinking Optimal Transport in Offline Reinforcement Learning [64.56896902186126]
In offline reinforcement learning, the data is provided by various experts and some of them can be sub-optimal.
To extract an efficient policy, it is necessary to stitch the best behaviors from the dataset.
We present an algorithm that aims to find a policy that maps states to a partial distribution of the best expert actions for each given state.
arXiv Detail & Related papers (2024-10-17T22:36:43Z) - Joint Demonstration and Preference Learning Improves Policy Alignment with Human Feedback [58.049113055986375]
We develop a single-stage approach named Alignment with Integrated Human Feedback (AIHF) to train reward models and the policy.
The proposed approach admits a suite of efficient algorithms, which can easily reduce to, and leverage, popular alignment algorithms.
We demonstrate the efficiency of the proposed solutions with extensive experiments involving alignment problems in LLMs and robotic control problems in MuJoCo.
arXiv Detail & Related papers (2024-06-11T01:20:53Z) - Reinforcement Learning for Solving Stochastic Vehicle Routing Problem [0.09831489366502298]
This study addresses a gap in the utilization of Reinforcement Learning (RL) and Machine Learning (ML) techniques in solving the Stochastic Vehicle Routing Problem (SVRP).
We propose a novel end-to-end framework that comprehensively addresses the key sources of uncertainty in the SVRP and utilizes an RL agent with a simple yet effective architecture and a tailored training method.
Our proposed model demonstrates superior performance compared to a widely adopted state-of-the-art metaheuristic, achieving a significant 3.43% reduction in travel costs.
arXiv Detail & Related papers (2023-11-13T19:46:22Z) - Combinatorial Optimization enriched Machine Learning to solve the
Dynamic Vehicle Routing Problem with Time Windows [5.4807970361321585]
We propose a novel machine learning pipeline that incorporates an optimization layer.
We apply this pipeline to a dynamic vehicle routing problem with dispatching waves, which was recently promoted in the EURO Meets NeurIPS Competition at NeurIPS 2022.
Our methodology ranked first in this competition, outperforming all other approaches in solving the proposed dynamic vehicle routing problem.
arXiv Detail & Related papers (2023-04-03T08:23:09Z) - MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion
Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z) - Online Control of Adaptive Large Neighborhood Search using Deep Reinforcement Learning [4.374837991804085]
We introduce a Deep Reinforcement Learning based approach called DR-ALNS that selects operators, adjusts parameters, and controls the acceptance criterion throughout the search.
We evaluate the proposed method on an orienteering problem with stochastic weights and time windows, as presented in an IJCAI competition.
The results show that our approach outperforms vanilla ALNS, ALNS tuned with Bayesian optimization, and two state-of-the-art DRL approaches.
arXiv Detail & Related papers (2022-11-01T21:33:46Z) - Sample-Efficient, Exploration-Based Policy Optimisation for Routing
Problems [2.6782615615913348]
This paper presents a new reinforcement learning approach that is based on entropy.
In addition, we design an off-policy-based reinforcement learning technique that maximises the expected return.
We show that our model can generalise to various routing problems.
arXiv Detail & Related papers (2022-05-31T09:51:48Z) - Combining Pessimism with Optimism for Robust and Efficient Model-Based
Deep Reinforcement Learning [56.17667147101263]
In real-world tasks, reinforcement learning agents encounter situations that are not present during training time.
To ensure reliable performance, the RL agents need to exhibit robustness against worst-case situations.
We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem.
arXiv Detail & Related papers (2021-03-18T16:50:17Z) - Deep Policy Dynamic Programming for Vehicle Routing Problems [89.96386273895985]
We propose Deep Policy Dynamic Programming (DPDP) to combine the strengths of learned neural heuristics with those of dynamic programming algorithms.
DPDP prioritizes and restricts the DP state space using a policy derived from a deep neural network, which is trained to predict edges from example solutions.
We evaluate our framework on the travelling salesman problem (TSP) and the vehicle routing problem (VRP) and show that the neural policy improves the performance of (restricted) DP algorithms (a minimal sketch of this beam-restriction idea appears after this list).
arXiv Detail & Related papers (2021-02-23T15:33:57Z) - Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot
Locomotion [78.46388769788405]
We introduce guided constrained policy optimization (GCPO), an RL framework based upon our implementation of constrained proximal policy optimization (CPPO).
We show that guided constrained RL offers faster convergence close to the desired optimum, resulting in optimal, yet physically feasible, robotic control behavior without the need for precise reward function tuning.
arXiv Detail & Related papers (2020-02-22T10:15:53Z)
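Following up on the Deep Policy Dynamic Programming entry above, the sketch below illustrates the beam-restricted DP idea: a Held-Karp-style dynamic program over (visited-set, last-node) states in which only the states scored highest by a neural policy are kept at each step. This is a rough sketch, not the paper's implementation; `edge_scores` stands in for the (here hypothetical) network's edge predictions, and the function name and parameters are illustrative assumptions.

```python
def policy_restricted_dp_tsp(dist, edge_scores, beam_width=100):
    """Held-Karp-style DP for TSP, restricted to a beam of states
    ranked by a policy score (a rough sketch of the DPDP idea).

    dist:        n x n matrix of travel costs
    edge_scores: n x n matrix of policy scores (higher = more promising edge)
    """
    n = len(dist)
    # state: (frozenset of visited nodes, last node) -> (cost, score, path)
    beam = {(frozenset([0]), 0): (0.0, 0.0, [0])}
    for _ in range(n - 1):
        expanded = {}
        for (visited, last), (cost, score, path) in beam.items():
            for j in range(n):
                if j in visited:
                    continue
                key = (visited | {j}, j)
                cand = (cost + dist[last][j],
                        score + edge_scores[last][j],
                        path + [j])
                # keep the cheapest way of reaching each DP state
                if key not in expanded or cand[0] < expanded[key][0]:
                    expanded[key] = cand
        # restriction step: keep only the states the policy scores highest
        beam = dict(sorted(expanded.items(),
                           key=lambda kv: kv[1][1],
                           reverse=True)[:beam_width])
    # close the tour back at the depot (node 0)
    cost, _, path = min(beam.values(), key=lambda v: v[0] + dist[v[2][-1]][0])
    return cost + dist[path[-1]][0], path + [0]
```

With a beam wide enough to hold every reachable state this reduces to exact Held-Karp; smaller beams trade optimality for tractable cost, with the learned scores deciding which part of the DP state space survives.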