Reinforcement Learning to Solve NP-hard Problems: an Application to the
CVRP
- URL: http://arxiv.org/abs/2201.05393v1
- Date: Fri, 14 Jan 2022 11:16:17 GMT
- Title: Reinforcement Learning to Solve NP-hard Problems: an Application to the
CVRP
- Authors: Leo Ardon
- Abstract summary: We evaluate the use of Reinforcement Learning (RL) to solve a classic optimization problem.
We compare two of the most promising RL approaches with traditional solving techniques on a set of benchmark instances.
We find that despite not returning the best solution, the RL approach has many advantages over traditional solvers.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we evaluate the use of Reinforcement Learning (RL) to solve a
classic combinatorial optimization problem: the Capacitated Vehicle Routing
Problem (CVRP). We formalize this problem in the RL framework and compare two
of the most promising RL approaches with traditional solving techniques on a
set of benchmark instances. We measure the different approaches by the
quality of the solution returned and the time required to return it. We found
that, despite not returning the best solution, the RL approach has many
advantages over traditional solvers. First, the versatility of the framework
allows the resolution of more complex combinatorial problems. Moreover, instead
of trying to solve a specific instance of the problem, the RL algorithm learns
the skills required to solve the problem. The trained policy can then almost
instantly provide a solution to an unseen problem without having to solve it
from scratch. Finally, the use of trained models makes the RL solver by far the
fastest, making this approach better suited for commercial use where the user
experience is paramount. Techniques like Knowledge Transfer can also
be used to improve the training efficiency of the algorithm and help solve
bigger and more complex problems.
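To make the RL formalization concrete, here is a minimal sketch of how the CVRP can be cast as a Markov decision process: the agent picks the next node to visit and the reward is the negative travel distance. This state/action/reward design is an illustrative assumption, not necessarily the exact formulation used in the paper.

```python
import numpy as np

class CVRPEnv:
    """Illustrative CVRP-as-MDP: the agent chooses the next node to visit;
    the reward is the negative distance travelled."""

    def __init__(self, coords, demands, capacity):
        self.coords = np.asarray(coords)    # node 0 is the depot
        self.demands = np.asarray(demands)  # demands[0] == 0
        self.capacity = capacity

    def reset(self):
        self.position = 0                   # start at the depot
        self.load = self.capacity
        self.visited = {0}
        return self._state()

    def _state(self):
        return (self.position, self.load, frozenset(self.visited))

    def step(self, node):
        # Returning to the depot refills the vehicle; visiting a customer
        # is only legal if its demand fits in the remaining load.
        assert node == 0 or (node not in self.visited
                             and self.demands[node] <= self.load)
        reward = -np.linalg.norm(self.coords[self.position] - self.coords[node])
        self.load = self.capacity if node == 0 else self.load - self.demands[node]
        self.position = node
        self.visited.add(node)
        done = len(self.visited) == len(self.coords) and node == 0
        return self._state(), reward, done
```

Because the policy maps any such state to an action, it can be applied to an unseen instance directly, which is what allows the quasi-instant inference described in the abstract.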
Related papers
- Multiobjective Vehicle Routing Optimization with Time Windows: A Hybrid Approach Using Deep Reinforcement Learning and NSGA-II [52.083337333478674]
This paper proposes a weight-aware deep reinforcement learning (WADRL) approach designed to address the multiobjective vehicle routing problem with time windows (MOVRPTW).
The Non-dominated sorting genetic algorithm-II (NSGA-II) method is then employed to optimize the outcomes produced by the WADRL.
arXiv Detail & Related papers (2024-07-18T02:46:06Z)
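One plausible reading of the "weight-aware" mechanism is a preference weight w that scalarizes the competing objectives and is also fed to the policy. The sketch below, including the objective names travel_time and tardiness, is a hypothetical illustration rather than the paper's actual formulation.

```python
import numpy as np

def scalarized_reward(travel_time, tardiness, w):
    """Hypothetical weight-aware reward: w trades off total travel time
    against time-window violations. Feeding w to the policy as well lets
    one trained model cover the whole trade-off curve."""
    return -(w * travel_time + (1.0 - w) * tardiness)

# Sampling w per training episode exposes the policy to many trade-offs.
w = np.random.uniform(0.0, 1.0)
reward = scalarized_reward(travel_time=120.0, tardiness=15.0, w=w)
```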
- Transform then Explore: a Simple and Effective Technique for Exploratory Combinatorial Optimization with Reinforcement Learning [11.531786269804707]
We propose a gauge transformation (GT) technique to solve combinatorial optimization problems (COPs) over graphs.
GT is very simple: it can be implemented in fewer than 10 lines of Python code and applied to the vast majority of reinforcement learning models.
We show that traditional RL models equipped with the GT technique achieve state-of-the-art performance on the MaxCut problem.
arXiv Detail & Related papers (2024-04-06T15:31:17Z)
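The entry above claims GT fits in under 10 lines of Python. One plausible reconstruction for MaxCut-style instances, assuming the usual Ising-model gauge (flip the signs of a weight matrix by a ±1 vector, which relabels solutions without changing their objective values), is:

```python
import numpy as np

def gauge_transform(W, g):
    """Transform a symmetric weight matrix W by a gauge vector g in {-1,+1}^n:
    W'[i, j] = g[i] * g[j] * W[i, j]. A spin assignment s for W' corresponds
    to the assignment g * s for W with identical Ising energy, so the two
    instances are equivalent but may be explored differently by an RL agent."""
    return np.outer(g, g) * W

n = 6
W = np.random.randn(n, n); W = (W + W.T) / 2   # random symmetric instance
g = np.random.choice([-1, 1], size=n)           # random gauge
s = np.random.choice([-1, 1], size=n)           # any spin configuration
assert np.isclose(s @ gauge_transform(W, g) @ s, (g * s) @ W @ (g * s))
```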
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
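Maximizing both entropy and return is the standard soft actor-critic objective, so MARLIN's core actor update presumably resembles the following sketch; the tensor shapes and temperature alpha are illustrative, not MARLIN's actual code.

```python
import torch

def sac_policy_loss(log_prob, q_value, alpha=0.2):
    """Soft actor-critic actor objective: maximize E[Q(s, a) - alpha * log pi(a|s)],
    i.e. return plus alpha-weighted policy entropy. Minimizing the negative
    of that expectation gives the loss below."""
    return (alpha * log_prob - q_value).mean()

# log_prob and q_value would come from the policy and critic networks for a
# batch of sampled actions; alpha sets the exploration/exploitation trade-off.
loss = sac_policy_loss(torch.randn(32), torch.randn(32))
```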
- Solving the vehicle routing problem with deep reinforcement learning [0.0]
This paper focuses on the application of RL to the Vehicle Routing Problem (VRP), a famous problem that belongs to the class of NP-Hard problems.
In a second phase, the neural architecture behind the Actor and Critic is defined, adopting an architecture based on convolutional neural networks.
Experiments performed on a wide range of instances show that the algorithm has good generalization capabilities and can reach good solutions in a short time.
arXiv Detail & Related papers (2022-07-30T12:34:26Z)
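A minimal sketch of an Actor-Critic built on a convolutional encoder, as the entry above describes; the layer sizes, input features, and the per-node logit head are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConvActorCritic(nn.Module):
    """Illustrative Actor-Critic with a shared convolutional encoder over
    per-node instance features (e.g. coordinates and demand)."""

    def __init__(self, n_nodes, channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 64, kernel_size=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=1), nn.ReLU(),
        )
        self.actor = nn.Linear(64, 1)              # per-node logit: next node to visit
        self.critic = nn.Linear(64 * n_nodes, 1)   # value estimate of the state

    def forward(self, x):                 # x: (batch, channels, n_nodes)
        h = self.encoder(x)               # (batch, 64, n_nodes)
        logits = self.actor(h.transpose(1, 2)).squeeze(-1)  # (batch, n_nodes)
        value = self.critic(h.flatten(1))                   # (batch, 1)
        return logits, value

logits, value = ConvActorCritic(n_nodes=20)(torch.randn(2, 3, 20))
```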
- Meta Reinforcement Learning with Successor Feature Based Context [51.35452583759734]
We propose a novel meta-RL approach that achieves competitive performance compared to existing meta-RL algorithms.
Our method not only learns high-quality policies for multiple tasks simultaneously but also adapts quickly to new tasks with a small amount of training.
arXiv Detail & Related papers (2022-07-29T14:52:47Z)
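Successor-feature methods like the one above decouple dynamics from reward: if rewards are linear in features, Q factorizes as psi(s, a) . w, and adapting to a new task reduces to inferring its weight vector w. A minimal sketch with made-up dimensions:

```python
import numpy as np

def q_from_successor_features(psi, w):
    """If r(s, a) = phi(s, a) . w, then Q^pi(s, a) = psi^pi(s, a) . w, where
    psi^pi(s, a) = E[sum_t gamma^t phi(s_t, a_t)] are the successor features.
    The dynamics live in psi; the task lives entirely in w."""
    return psi @ w

psi = np.random.rand(4, 8)   # successor features: 4 actions, 8 feature dims
w = np.random.rand(8)        # inferred task context/weights
best_action = int(np.argmax(q_from_successor_features(psi, w)))
```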
- Solving the Traveling Salesperson Problem with Precedence Constraints by Deep Reinforcement Learning [59.14935871979047]
This work presents solutions to the Traveling Salesperson Problem with precedence constraints (TSPPC) using Deep Reinforcement Learning (DRL).
Common to these approaches is the use of graph models based on multi-head attention layers.
arXiv Detail & Related papers (2022-07-04T14:31:47Z)
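A compact sketch of the shared backbone mentioned in the entry above, a stack of multi-head self-attention layers over node embeddings; PyTorch and the dimensions below are assumptions for illustration, not the exact architecture of any cited paper.

```python
import torch
import torch.nn as nn

class GraphAttentionEncoder(nn.Module):
    """Stack of multi-head self-attention layers over node embeddings, the
    common backbone of attention-based routing models."""

    def __init__(self, dim=128, heads=8, layers=3):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.MultiheadAttention(dim, heads, batch_first=True)
            for _ in range(layers)
        ])

    def forward(self, x):                 # x: (batch, n_nodes, dim)
        for attn in self.layers:
            out, _ = attn(x, x, x)        # self-attention over all nodes
            x = x + out                   # residual connection
        return x

nodes = torch.randn(2, 20, 128)           # 2 instances, 20 nodes each
embeddings = GraphAttentionEncoder()(nodes)
```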
- Solving the capacitated vehicle routing problem with timing windows using rollouts and MAX-SAT [4.873362301533824]
Vehicle routing is a well-known class of NP-hard optimisation problems.
Recent work has shown reinforcement learning to be a promising alternative approach.
This paper proposes a hybrid approach that combines reinforcement learning, policy rollouts, and a satisfiability (MAX-SAT) solver.
arXiv Detail & Related papers (2022-06-14T06:27:09Z)
- Sample-Efficient, Exploration-Based Policy Optimisation for Routing Problems [2.6782615615913348]
This paper presents a new entropy-based reinforcement learning approach.
In addition, we design an off-policy reinforcement learning technique that maximises the expected return.
We show that our model can generalise to various routing problems.
arXiv Detail & Related papers (2022-05-31T09:51:48Z)
- Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning [56.17667147101263]
In real-world tasks, reinforcement learning agents encounter situations that are not present during training time.
To ensure reliable performance, the RL agents need to exhibit robustness against worst-case situations.
We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem.
arXiv Detail & Related papers (2021-03-18T16:50:17Z)
- SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning [102.78958681141577]
We present SUNRISE, a simple unified ensemble method, which is compatible with various off-policy deep reinforcement learning algorithms.
SUNRISE integrates two key ingredients: (a) ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble, and (b) an inference method that selects actions using the highest upper-confidence bounds for efficient exploration.
arXiv Detail & Related papers (2020-07-09T17:08:44Z)
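Both SUNRISE ingredients are concrete enough to sketch directly from the description above; the temperature and the lambda exploration coefficient below are assumed hyperparameters.

```python
import torch

def weighted_bellman_weight(target_q_std, temperature=10.0):
    """Ingredient (a): confidence weight for each Bellman target, shrinking
    toward 0.5 as the Q-ensemble's standard deviation grows."""
    return torch.sigmoid(-target_q_std * temperature) + 0.5

def ucb_action(q_ensemble, lam=1.0):
    """Ingredient (b): pick the action maximizing mean + lam * std across the
    ensemble's Q-values, an optimism bonus for efficient exploration."""
    mean, std = q_ensemble.mean(dim=0), q_ensemble.std(dim=0)
    return torch.argmax(mean + lam * std)

q_ensemble = torch.randn(5, 4)   # 5 ensemble members, 4 candidate actions
action = ucb_action(q_ensemble)
```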
- Reinforcement Learning for Combinatorial Optimization: A Survey [12.323976053967066]
Many traditional algorithms for solving combinatorial optimization problems involve using hand-crafted heuristics that sequentially construct a solution.
Reinforcement learning (RL) offers a good alternative: the search for these heuristics can be automated by training an agent in a supervised or self-supervised manner.
arXiv Detail & Related papers (2020-03-07T16:19:45Z)
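The survey's framing, an agent standing in for a hand-crafted construction heuristic, reduces to a loop like the following; `policy` is a hypothetical stand-in for any trained agent (or any classical heuristic in the same interface).

```python
def construct_solution(instance, policy):
    """Sequentially build a solution by letting a learned policy choose the
    next element, exactly where a hand-crafted heuristic would make a greedy
    choice."""
    solution, remaining = [], set(range(len(instance)))
    while remaining:
        choice = policy(instance, solution, remaining)  # learned decision rule
        solution.append(choice)
        remaining.remove(choice)
    return solution

# e.g. a trivial baseline heuristic expressed in the same interface:
tour = construct_solution(list(range(5)), lambda inst, sol, rem: min(rem))
```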