Deep Policy Dynamic Programming for Vehicle Routing Problems
- URL: http://arxiv.org/abs/2102.11756v1
- Date: Tue, 23 Feb 2021 15:33:57 GMT
- Title: Deep Policy Dynamic Programming for Vehicle Routing Problems
- Authors: Wouter Kool, Herke van Hoof, Joaquim Gromicho and Max Welling
- Abstract summary: We propose Deep Policy Dynamic Programming (DPDP) to combine the strengths of learned neural heuristics with those of dynamic programming algorithms.
DPDP prioritizes and restricts the DP state space using a policy derived from a deep neural network, which is trained to predict edges from example solutions.
We evaluate our framework on the travelling salesman problem (TSP) and the vehicle routing problem (VRP) and show that the neural policy improves the performance of (restricted) DP algorithms.
- Score: 89.96386273895985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Routing problems are a class of combinatorial problems with many practical
applications. Recently, end-to-end deep learning methods have been proposed to
learn approximate solution heuristics for such problems. In contrast, classical
dynamic programming (DP) algorithms can find optimal solutions, but scale badly
with the problem size. We propose Deep Policy Dynamic Programming (DPDP), which
aims to combine the strengths of learned neural heuristics with those of DP
algorithms. DPDP prioritizes and restricts the DP state space using a policy
derived from a deep neural network, which is trained to predict edges from
example solutions. We evaluate our framework on the travelling salesman problem
(TSP) and the vehicle routing problem (VRP) and show that the neural policy
improves the performance of (restricted) DP algorithms, making them competitive
to strong alternatives such as LKH, while also outperforming other 'neural
approaches' for solving TSPs and VRPs with 100 nodes.
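To make the mechanism concrete, here is a minimal sketch of the DPDP idea for the TSP, assuming a precomputed edge "heatmap": a dynamic program over (visited-set, current-node) states whose per-step beam is ranked by the heatmap scores. The function name, scoring rule, and placeholder heatmap are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch only: beam-restricted Held-Karp DP for the TSP, guided by a
# neural edge "heatmap" (here a placeholder array, standing in for the
# predictions of a network trained on example solutions).
import numpy as np

def dpdp_tsp(dist, heat, beam_size=100):
    """Heatmap-guided restricted DP; tours start and end at node 0."""
    n = len(dist)
    # DP state: (visited bitmask, current node) -> (cost, policy score, path)
    beam = {(1, 0): (0.0, 0.0, [0])}
    for _ in range(n - 1):
        expansions = {}
        for (mask, cur), (cost, score, path) in beam.items():
            for nxt in range(n):
                if mask & (1 << nxt):
                    continue  # node already visited
                key = (mask | (1 << nxt), nxt)
                cand = (cost + dist[cur][nxt], score + heat[cur][nxt],
                        path + [nxt])
                # Dominance: keep only the cheapest path per DP state.
                if key not in expansions or cand[0] < expansions[key][0]:
                    expansions[key] = cand
        # Restrict the state space: keep the states the policy likes best.
        beam = dict(sorted(expansions.items(),
                           key=lambda kv: -kv[1][1])[:beam_size])
    # Close the tour back to the depot and return the best one found.
    return min(((c + dist[cur][0], p + [0])
                for (_, cur), (c, _, p) in beam.items()), key=lambda t: t[0])

coords = np.random.rand(12, 2)
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
heat = np.exp(-dist)  # placeholder for learned edge probabilities
cost, tour = dpdp_tsp(dist, heat, beam_size=50)
```

With an unbounded beam this reduces to the exact (exponential) Held-Karp DP; the point of the learned policy is to keep solution quality high once the beam is small.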
Related papers
- Learn to Solve Vehicle Routing Problems ASAP: A Neural Optimization Approach for Time-Constrained Vehicle Routing Problems with Finite Vehicle Fleet [0.0]
We propose a neural combinatorial optimization (NCO) approach to solve a time-constrained capacitated VRP with a finite vehicle fleet size.
The method is able to find adequate and cost-efficient solutions, showing both flexibility and robust generalization.
arXiv Detail & Related papers (2024-11-07T15:16:36Z)
- Two-Stage ML-Guided Decision Rules for Sequential Decision Making under Uncertainty [55.06411438416805]
Sequential Decision Making under Uncertainty (SDMU) is ubiquitous in many domains such as energy, finance, and supply chains.
Some SDMU are naturally modeled as Multistage Problems (MSPs) but the resulting optimizations are notoriously challenging from a computational standpoint.
This paper introduces a novel approach, Two-Stage General Decision Rules (TS-GDR), to generalize the policy space beyond linear functions.
The effectiveness of TS-GDR is demonstrated through an instantiation using deep recurrent neural networks, named Two-Stage Deep Decision Rules (TS-DDR); a minimal sketch of the decision-rule idea follows this item.
arXiv Detail & Related papers (2024-05-23T18:19:47Z)
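As referenced above, a minimal sketch of the decision-rule idea behind TS-GDR: a linear rule y = A xi + b is replaced by a small neural network mapping the realized uncertainty to a recourse decision. The one-hidden-layer form, dimensions, and the name `rule` are illustrative assumptions; the paper uses deep recurrent networks trained on the expected second-stage cost.

```python
# Sketch only: a nonlinear decision rule for two-stage problems.
import numpy as np

rng = np.random.default_rng(0)
dim_xi, dim_y, hidden = 4, 3, 16  # hypothetical dimensions

# Parameters of the rule y = W2 @ tanh(W1 @ xi + b1) + b2; in TS-GDR
# these would be trained (e.g. by SGD) to minimize expected
# second-stage cost over sampled uncertainty realizations.
W1, b1 = 0.1 * rng.normal(size=(hidden, dim_xi)), np.zeros(hidden)
W2, b2 = 0.1 * rng.normal(size=(dim_y, hidden)), np.zeros(dim_y)

def rule(xi):
    """Map an uncertainty realization xi to a recourse decision y."""
    return W2 @ np.tanh(W1 @ xi + b1) + b2

# A linear decision rule is the special case rule(xi) = A @ xi + b;
# generalizing this function class is the point of TS-GDR.
y = rule(rng.normal(size=dim_xi))
```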
- An Efficient Learning-based Solver Comparable to Metaheuristics for the Capacitated Arc Routing Problem [67.92544792239086]
We introduce an NN-based solver to significantly narrow the gap with advanced metaheuristics.
First, we propose a direction-aware facilitating attention model (DaAM) to incorporate directionality into the embedding process.
Second, we design a supervised reinforcement learning scheme that involves supervised pre-training to establish a robust initial policy.
arXiv Detail & Related papers (2024-03-11T02:17:42Z)
- Genetic Algorithms with Neural Cost Predictor for Solving Hierarchical Vehicle Routing Problems [20.684353068460375]
When vehicle routing decisions are intertwined with higher-level decisions, the resulting optimization problems pose significant challenges for computation.
We propose a novel deep-learning-based approach called Genetic Algorithm with Neural Cost Predictor (GANCP) to tackle this challenge.
In particular, our proposed neural network learns the objective values of the HGS-CVRP open-source package that solves capacitated vehicle routing problems; a sketch of this surrogate-guided search follows this item.
arXiv Detail & Related papers (2023-10-22T02:46:37Z)
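A minimal sketch of the surrogate-guided search referenced above: a genetic algorithm over high-level assignments whose fitness calls a cheap learned predictor instead of an exact CVRP solve. The toy `surrogate_cost`, the assignment encoding, and all hyperparameters are illustrative assumptions; in the paper the predictor is trained on objective values produced by HGS-CVRP.

```python
# Sketch only: GA with a learned cost surrogate in the fitness loop.
import random

random.seed(0)
N_CUSTOMERS, N_DEPOTS = 20, 3
# Toy stand-in for the trained neural cost predictor (assumption).
PREF = [[random.random() for _ in range(N_DEPOTS)] for _ in range(N_CUSTOMERS)]

def surrogate_cost(assignment):
    """Cheap predicted cost of a customer-to-depot assignment."""
    pref = sum(PREF[i][d] for i, d in enumerate(assignment))
    loads = [assignment.count(d) for d in range(N_DEPOTS)]
    return pref + 0.5 * (max(loads) - min(loads))  # mild balance penalty

def gancp(pop_size=30, gens=50, mut=0.1):
    population = [[random.randrange(N_DEPOTS) for _ in range(N_CUSTOMERS)]
                  for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=surrogate_cost)    # rank by *predicted* cost
        parents = population[:pop_size // 2]   # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_CUSTOMERS)
            child = a[:cut] + b[cut:]          # one-point crossover
            children.append([random.randrange(N_DEPOTS)
                             if random.random() < mut else g for g in child])
        population = parents + children
    # The paper re-evaluates the best candidates with the real solver;
    # here we simply return the surrogate's favorite.
    return min(population, key=surrogate_cost)

best_assignment = gancp()
```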
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of Deep Matching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Matching, Maximum-Weight-Bipartite-Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy-gradient algorithm for TMDPs, obtained by a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z)
- Solving the vehicle routing problem with deep reinforcement learning [0.0]
This paper focuses on the application of RL for the Vehicle Routing Problem (VRP), a famous problem that belongs to the class of NP-Hard problems.
The neural architecture behind the Actor and the Critic is then established, adopting a design based on convolutional neural networks.
Experiments performed on a wide range of instances show that the algorithm has good generalization capabilities and can reach good solutions in a short time.
arXiv Detail & Related papers (2022-07-30T12:34:26Z)
- Processing Network Controls via Deep Reinforcement Learning [0.0]
This dissertation is concerned with the theoretical justification and practical application of advanced policy gradient (APG) algorithms.
Policy improvement bounds play a crucial role in the theoretical justification of APG algorithms.
arXiv Detail & Related papers (2022-05-01T04:34:21Z)
- Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piece-wise linear value function.
$\nu$-SDDP can significantly reduce problem-solving cost without sacrificing solution quality; a sketch of the piecewise-linear value-function idea follows this item.
arXiv Detail & Related papers (2021-12-01T22:55:23Z)
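A minimal sketch of the value-function representation behind $\nu$-SDDP: a convex piecewise-linear function expressed as a maximum over affine cuts, whose parameters would be produced from problem-instance features by the trained model. The random cut parameters below are placeholders for that model's output.

```python
# Sketch only: a piecewise-linear convex value function
# V(x) = max_k (a_k . x + b_k), i.e. a maximum over affine cuts.
import numpy as np

rng = np.random.default_rng(0)
n_cuts, dim = 8, 5                   # hypothetical sizes
A = rng.normal(size=(n_cuts, dim))   # cut slopes a_k (would come from the model)
b = rng.normal(size=n_cuts)          # cut intercepts b_k

def value(x):
    """Evaluate the cutting-plane value function at state x."""
    return float(np.max(A @ x + b))

# Classical SDDP accumulates such cuts by repeatedly solving stage
# subproblems; predicting them directly amortizes that cost across instances.
v = value(rng.normal(size=dim))
```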