On Statistical Learning of Branch and Bound for Vehicle Routing
Optimization
- URL: http://arxiv.org/abs/2310.09986v2
- Date: Tue, 17 Oct 2023 17:50:36 GMT
- Title: On Statistical Learning of Branch and Bound for Vehicle Routing
Optimization
- Authors: Andrew Naguib, Waleed A. Yousef, Issa Traoré, Mohammad Mamun
- Abstract summary: We train neural networks to emulate the decision-making process of the computationally expensive Strong Branching strategy.
We find that this approach can match or improve upon the performance of the branch and bound algorithm.
- Score: 3.6922704509753084
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, machine learning of the branch and bound algorithm has shown
promise in approximating competent solutions to NP-hard problems. In this
paper, we utilize and comprehensively compare the outcomes of three neural
networks--graph convolutional neural network (GCNN), GraphSAGE, and graph
attention network (GAT)--to solve the capacitated vehicle routing problem. We
train these neural networks to emulate the decision-making process of the
computationally expensive Strong Branching strategy. The neural networks are
trained on six instances with distinct topologies from the CVRPLIB and
evaluated on eight additional instances. Moreover, we reduced the problem of
determining the minimum number of vehicles required to solve a CVRP instance to
a bin-packing problem, which was addressed in a similar manner. Through rigorous
experimentation, we
found that this approach can match or improve upon the performance of the
branch and bound algorithm with the Strong Branching strategy while requiring
significantly less computational time. The source code that corresponds to our
research findings and methodology is readily accessible and available for
reference at the following web address: https://isotlaboratory.github.io/ml4vrp
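The vehicle-minimisation step described in the abstract casts customer demands into a bin-packing instance (demands packed into capacity-sized bins). The following is a minimal illustrative sketch of that reduction, not the authors' implementation; the function name and the first-fit-decreasing heuristic are our own choices:

```python
import math

def min_vehicles_bin_packing(demands, capacity):
    """Estimate the minimum number of vehicles for a CVRP instance by
    packing customer demands into capacity-sized bins.
    Returns (lower_bound, ffd_upper_bound)."""
    # Trivial lower bound: total demand divided by vehicle capacity.
    lower = math.ceil(sum(demands) / capacity)
    # First-fit decreasing: place each demand in the first vehicle
    # that still has room, opening a new vehicle when none does.
    bins = []  # remaining capacity of each opened vehicle
    for d in sorted(demands, reverse=True):
        for i, free in enumerate(bins):
            if d <= free:
                bins[i] -= d
                break
        else:
            bins.append(capacity - d)
    return lower, len(bins)

print(min_vehicles_bin_packing([4, 8, 1, 4, 2, 1], 10))  # (2, 2)
```

When the two values agree, the fleet size is exactly determined; otherwise the paper resolves the gap with an exact branch-and-bound on the bin-packing formulation.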
Related papers
- Neural Networks for Vehicle Routing Problem [0.0]
Route optimization can be viewed as a new challenge for neural networks.
Recent developments in machine learning provide a new toolset for tackling complex problems.
Neural networks have mainly been applied to classification and regression tasks.
arXiv Detail & Related papers (2024-09-17T15:45:30Z) - LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability to neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
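The Sinkhorn iteration at the core of such a satisfiability layer can be sketched in a few lines. This is a toy single-marginal version (rows and columns both normalised to one), not the paper's multi-set extension; the function name is our own:

```python
import numpy as np

def sinkhorn(scores, n_iters=50):
    """Alternately normalise rows and columns of a positive matrix so it
    converges toward a doubly stochastic matrix (Sinkhorn iteration)."""
    m = np.exp(scores)  # exponentiate to guarantee strict positivity
    for _ in range(n_iters):
        m = m / m.sum(axis=1, keepdims=True)  # make row sums 1
        m = m / m.sum(axis=0, keepdims=True)  # make column sums 1
    return m

rng = np.random.default_rng(0)
m = sinkhorn(rng.random((4, 4)))
```

Because every step is differentiable, the iteration can be unrolled inside a network and trained end to end, which is what makes it usable as a layer.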
arXiv Detail & Related papers (2024-07-18T22:05:21Z) - Sample Complexity of Algorithm Selection Using Neural Networks and Its Applications to Branch-and-Cut [1.4624458429745086]
We build upon recent work in this line of research by considering the setup where, instead of selecting a single algorithm that has the best performance, we allow the possibility of selecting an algorithm based on the instance to be solved.
In particular, given a representative sample of instances, we learn a neural network that maps an instance of the problem to the most appropriate algorithm for that instance.
In other words, the neural network will take as input a mixed-integer optimization instance and output a decision that will result in a small branch-and-cut tree for that instance.
arXiv Detail & Related papers (2024-02-04T03:03:27Z) - Unfolded proximal neural networks for robust image Gaussian denoising [7.018591019975253]
We propose a unified framework to build PNNs for the Gaussian denoising task, based on both the dual-FB and the primal-dual Chambolle-Pock algorithms.
We also show that accelerated versions of these algorithms enable skip connections in the associated NN layers.
arXiv Detail & Related papers (2023-08-06T15:32:16Z) - The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP).
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework, each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z) - Towards Better Out-of-Distribution Generalization of Neural Algorithmic
Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z) - A Continuous Optimisation Benchmark Suite from Neural Network Regression [0.0]
Training neural networks is an optimisation task that has gained prominence with the recent successes of deep learning.
Gradient descent variants are by far the most common choice, trusted for their good performance on large-scale machine learning tasks.
We contribute CORNN, a suite for benchmarking the performance of any continuous black-box algorithm on neural network training problems.
arXiv Detail & Related papers (2021-09-12T20:24:11Z) - Solve routing problems with a residual edge-graph attention neural
network [10.42872026679616]
In this paper, an end-to-end deep reinforcement learning framework is proposed to solve NP-hard optimization problems.
The proposed framework aims to improve on models in the literature in terms of both the neural network architecture and the training algorithm.
Specifically, the average optimality gap is reduced from 4.53% (best reported in [R22]) to 3.67% for TSP with 100 nodes and from 7.34% (best reported in [R22]) to 6.68% for CVRP with 100 nodes when using the greedy decoding strategy.
arXiv Detail & Related papers (2021-05-06T14:47:47Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Fitting the Search Space of Weight-sharing NAS with Graph Convolutional
Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z) - Channel Assignment in Uplink Wireless Communication using Machine
Learning Approach [54.012791474906514]
This letter investigates a channel assignment problem in uplink wireless communication systems.
Our goal is to maximize the sum rate of all users subject to integer channel assignment constraints.
Due to high computational complexity, machine learning approaches are employed to obtain computational efficient solutions.
arXiv Detail & Related papers (2020-01-12T15:54:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.