Supplementing Recurrent Neural Networks with Annealing to Solve
Combinatorial Optimization Problems
- URL: http://arxiv.org/abs/2207.08189v2
- Date: Thu, 26 Oct 2023 23:49:16 GMT
- Title: Supplementing Recurrent Neural Networks with Annealing to Solve
Combinatorial Optimization Problems
- Authors: Shoummo Ahsan Khandoker, Jawaril Munshad Abedin, Mohamed Hibat-Allah
- Abstract summary: In this paper, we demonstrate the potential of using variational classical annealing (VCA) as an approach to solving real-world optimization problems.
We find that VCA outperforms simulated annealing (SA) on average in the asymptotic limit by one or more orders of magnitude in terms of relative error.
We conclude that in the best case scenario, VCA can serve as a great alternative when SA fails to find the optimal solution.
- Score: 2.3642391095636484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combinatorial optimization problems can be solved by heuristic algorithms
such as simulated annealing (SA), which aims to find the optimal solution within
a large search space through thermal fluctuations. The algorithm generates new
solutions through Markov-chain Monte Carlo techniques. This sampling scheme can
result in severe limitations, such as slow convergence and a tendency to stay
within the same local search space at small temperatures. To overcome these
shortcomings, we use the variational classical annealing (VCA) framework that
combines autoregressive recurrent neural networks (RNNs) with traditional
annealing to sample solutions that are uncorrelated. In this paper, we
demonstrate the potential of using VCA as an approach to solving real-world
optimization problems. We explore VCA's performance in comparison with SA at
solving three popular optimization problems: the maximum cut problem (Max-Cut),
the nurse scheduling problem (NSP), and the traveling salesman problem (TSP).
For all three problems, we find that VCA outperforms SA on average in the
asymptotic limit by one or more orders of magnitude in terms of relative error.
Interestingly, we reach large system sizes of up to $256$ cities for the TSP.
We also conclude that in the best case scenario, VCA can serve as a great
alternative when SA fails to find the optimal solution.
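For context on the SA baseline described in the abstract, the sketch below shows a minimal Metropolis-style simulated annealing loop for Max-Cut. It is a generic illustration, not the paper's implementation: the energy definition (negative cut value), the single-spin-flip proposal, and the geometric cooling schedule are assumptions made for the example.

```python
import random
import math

def maxcut_energy(spins, edges):
    """Negative cut value: lower energy means a larger cut."""
    return -sum(1 for (i, j) in edges if spins[i] != spins[j])

def simulated_annealing(n, edges, t0=2.0, t_min=1e-3, cooling=0.999, seed=0):
    rng = random.Random(seed)
    spins = [rng.randint(0, 1) for _ in range(n)]
    energy = maxcut_energy(spins, edges)
    t = t0
    while t > t_min:
        i = rng.randrange(n)                 # propose a single-spin flip
        spins[i] ^= 1
        new_energy = maxcut_energy(spins, edges)
        delta = new_energy - energy
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            energy = new_energy              # accept (Metropolis rule)
        else:
            spins[i] ^= 1                    # reject: undo the flip
        t *= cooling                         # geometric cooling schedule
    return spins, -energy                    # spin assignment and cut size

# toy usage: a 4-cycle graph whose maximum cut is 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(simulated_annealing(4, edges)[1])
```

The slow, correlated exploration noted in the abstract comes from the single-flip Markov chain above: at low temperature almost every uphill move is rejected, so consecutive samples stay in the same local region.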
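The abstract describes VCA only at a high level (autoregressive RNNs combined with annealing). The following PyTorch sketch illustrates the general idea of variational neural annealing under a common formulation: draw uncorrelated configurations directly from an autoregressive model, estimate a free energy $F(T) = \langle E \rangle - T S$ from the samples, and anneal $T$ toward zero while training with a score-function (REINFORCE) gradient. The GRU cell, the linear annealing schedule, and the Max-Cut energy are illustrative assumptions; this is not the authors' code.

```python
import torch
import torch.nn as nn

class AutoregressiveSampler(nn.Module):
    """GRU that emits one Bernoulli variable per step, conditioned on earlier choices."""
    def __init__(self, n_vars, hidden=64):
        super().__init__()
        self.n_vars = n_vars
        self.cell = nn.GRUCell(1, hidden)
        self.head = nn.Linear(hidden, 1)

    def sample(self, batch):
        h = torch.zeros(batch, self.cell.hidden_size)
        x = torch.zeros(batch, 1)
        spins, log_prob = [], torch.zeros(batch)
        for _ in range(self.n_vars):
            h = self.cell(x, h)
            p = torch.sigmoid(self.head(h)).squeeze(-1)   # P(s_i = 1 | s_<i)
            s = torch.bernoulli(p)                        # exact sample, no Markov chain
            log_prob = log_prob + s * torch.log(p + 1e-10) + (1 - s) * torch.log(1 - p + 1e-10)
            spins.append(s)
            x = s.unsqueeze(-1)
        return torch.stack(spins, dim=1), log_prob

def maxcut_energy(spins, edges):
    """Batched Max-Cut energy: minus the number of cut edges (lower is better)."""
    e = torch.zeros(spins.shape[0])
    for i, j in edges:
        e = e - (spins[:, i] != spins[:, j]).float()
    return e

def train_vca(edges, n_vars, steps=2000, batch=64, t0=2.0, lr=1e-3):
    model = AutoregressiveSampler(n_vars)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for step in range(steps):
        t = t0 * (1.0 - step / steps)                 # linear annealing schedule (an assumption)
        spins, log_prob = model.sample(batch)
        energy = maxcut_energy(spins, edges)
        # Local free energy E + T * log p; its mean estimates F(T) = <E> - T*S.
        f_loc = energy + t * log_prob.detach()
        baseline = f_loc.mean()
        loss = ((f_loc - baseline) * log_prob).mean() # score-function gradient estimator
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# toy usage: a 4-cycle graph; after training, sampled spins should cut all 4 edges
model = train_vca([(0, 1), (1, 2), (2, 3), (3, 0)], n_vars=4, steps=500)
```

As the temperature reaches zero the objective reduces to minimizing the expected energy, so samples drawn from the trained model serve as candidate solutions whose relative error can be compared against the SA baseline, which is the comparison the paper reports.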
Related papers
- Learning Multiple Initial Solutions to Optimization Problems [52.9380464408756]
Sequentially solving similar optimization problems under strict runtime constraints is essential for many applications.
We propose learning to predict multiple diverse initial solutions given parameters that define the problem instance.
We find significant and consistent improvement with our method across all evaluation settings and demonstrate that it efficiently scales with the number of initial solutions required.
arXiv Detail & Related papers (2024-11-04T15:17:19Z) - Learning to Solve Combinatorial Optimization under Positive Linear Constraints via Non-Autoregressive Neural Networks [103.78912399195005]
Combinatorial optimization (CO) is a fundamental problem at the intersection of computer science, applied mathematics, and related fields.
In this paper, we design a family of non-autoregressive neural networks to solve CO problems under positive linear constraints.
We validate the effectiveness of this framework in solving representative CO problems including facility location, max-set covering, and traveling salesman problem.
arXiv Detail & Related papers (2024-09-06T14:58:31Z) - BalMCTS: Balancing Objective Function and Search Nodes in MCTS for
Constraint Optimization Problems [7.196057722218442]
Constraint Optimization Problems (COP) pose intricate challenges and are usually addressed through Branch and Bound (B&B) methods.
We propose a novel neural network algorithm based on a depth-first search algorithm for solving COP.
Our method identifies feasible solutions with a gap of less than 17.63% within the initial 5 feasible solutions.
arXiv Detail & Related papers (2023-12-26T03:09:08Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Let the Flows Tell: Solving Graph Combinatorial Optimization Problems
with GFlowNets [86.43523688236077]
Combinatorial optimization (CO) problems are often NP-hard and out of reach for exact algorithms.
GFlowNets have emerged as powerful machinery for efficiently sampling from composite unnormalized densities sequentially.
In this paper, we design Markov decision processes (MDPs) for different problems and propose to train conditional GFlowNets to sample from the solution space.
arXiv Detail & Related papers (2023-05-26T15:13:09Z) - Learning to Solve the AC Optimal Power Flow via a Lagrangian Approach [9.561589138108811]
We use a Lagrangian-based approach to ACOPF problems.
We show that our approach is able to obtain the globally optimal cost even when the training solutions are suboptimal.
arXiv Detail & Related papers (2021-10-04T18:29:08Z) - Contrastive Losses and Solution Caching for Predict-and-Optimize [19.31153168397003]
We use a Noise Contrastive approach to motivate a family of surrogate loss functions.
We address a major bottleneck of all predict-and-optimize approaches.
We show that even a very slow growth rate is enough to match the quality of state-of-the-art methods.
arXiv Detail & Related papers (2020-11-10T19:09:12Z) - A Hybrid Framework Using a QUBO Solver For Permutation-Based
Combinatorial Optimization [5.460573052311485]
We propose a hybrid framework to solve large-scale permutation-based problems using a high-performance quadratic unconstrained binary optimization solver.
We propose techniques to overcome the challenges of using a QUBO solver, which typically comes with a limited number of bits.
arXiv Detail & Related papers (2020-09-27T07:15:25Z) - Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - Anytime Behavior of Inexact TSP Solvers and Perspectives for Automated
Algorithm Selection [0.0]
The Traveling Salesperson Problem (TSP) is arguably one of the best-known NP-hard optimization problems.
We extend existing benchmarking studies by addressing anytime behaviour of inexact TSP solvers.
It turns out that performance ranking of solvers is highly dependent on the focused approximation quality.
arXiv Detail & Related papers (2020-05-27T11:36:53Z) - GACEM: Generalized Autoregressive Cross Entropy Method for Multi-Modal
Black Box Constraint Satisfaction [69.94831587339539]
We present a modified Cross-Entropy Method (CEM) that uses a masked auto-regressive neural network for modeling uniform distributions over the solution space.
Our algorithm is able to express complicated solution spaces, thus allowing it to track a variety of different solution regions.
arXiv Detail & Related papers (2020-02-17T20:21:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.