Solving the QAP by Two-Stage Graph Pointer Networks and Reinforcement Learning
- URL: http://arxiv.org/abs/2404.00539v1
- Date: Sun, 31 Mar 2024 03:01:56 GMT
- Title: Solving the QAP by Two-Stage Graph Pointer Networks and Reinforcement Learning
- Authors: Satoko Iida, Ryota Yasudo
- Abstract summary: The Quadratic Assignment Problem (QAP) is a practical combinatorial optimization problem that has been studied for many years.
The paper proposes a deep reinforcement learning model, the two-stage graph pointer network (GPN), for solving the QAP.
The two-stage GPN builds on the GPN, which was originally proposed for the Euclidean Traveling Salesman Problem (TSP).
- Score: 0.22099217573031676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Quadratic Assignment Problem (QAP) is a practical combinatorial optimization problem that has been studied for many years. Since it is NP-hard, solving large problem instances of the QAP is challenging. Although heuristics can find semi-optimal solutions, their execution time increases significantly as the problem size grows. Recently, deep learning has attracted attention as a faster alternative to heuristics for combinatorial optimization. Even with deep learning, however, solving large QAP instances remains challenging. In this paper, we propose a deep reinforcement learning model called the two-stage graph pointer network (GPN) for solving the QAP. The two-stage GPN builds on the GPN, which was originally proposed for the Euclidean Traveling Salesman Problem (TSP). First, we extend the GPN to general TSP, and then we add new algorithms to that model for solving the QAP. Our experimental results show that the two-stage GPN provides semi-optimal solutions for benchmark problem instances from TSPLIB and QAPLIB.
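As a concrete reference for the objective the paper optimizes, here is a minimal sketch of the QAP cost and a brute-force baseline. The flow and distance matrices below are toy illustrative data, not from the paper; the exhaustive search is only viable for tiny instances, which is why heuristic and learned solvers matter at scale.

```python
from itertools import permutations

def qap_cost(F, D, perm):
    """Cost of assigning facility i to location perm[i]:
    sum over all i, j of flow F[i][j] times distance D[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(F[i][j] * D[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def brute_force_qap(F, D):
    """Exhaustive search over all n! assignments (tiny n only)."""
    n = len(F)
    return min(permutations(range(n)), key=lambda p: qap_cost(F, D, p))

# Toy 3-facility instance (illustrative, not from the paper).
F = [[0, 5, 2], [5, 0, 3], [2, 3, 0]]   # flows between facilities
D = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]   # distances between locations
best = brute_force_qap(F, D)
```

A learned solver such as the two-stage GPN produces a permutation directly instead of enumerating all of them.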
Related papers
- Sum-of-Squares inspired Quantum Metaheuristic for Polynomial Optimization with the Hadamard Test and Approximate Amplitude Constraints [76.53316706600717]
The recently proposed quantum algorithm arXiv:2206.14999 is based on semidefinite programming (SDP).
We generalize the SDP-inspired quantum algorithm to sum-of-squares.
Our results show that our algorithm is suitable for large problems and approximates the best known classical results.
arXiv Detail & Related papers (2024-08-14T19:04:13Z) - Towards Less Greedy Quantum Coalition Structure Generation in Induced Subgraph Games [3.6021182997326022]
The transition to 100% renewable energy requires new techniques for managing energy networks, such as dividing them into sensible subsets of prosumers called micro-grids.
Doing so in an optimal manner is a difficult optimization problem, as it can be abstracted to the Coalition Structure Generation problem in Induced Subgraph Games.
This work proposes several less greedy QA-based approaches and investigates whether any of them can outperform GCS-Q in terms of solution quality.
arXiv Detail & Related papers (2024-08-08T10:54:56Z) - Learning Solution-Aware Transformers for Efficiently Solving Quadratic Assignment Problem [27.33966993065248]
This work focuses on learning-based solutions for efficiently solving the Quadratic Assignment Problem (QAP).
Current research on the QAP suffers from limited scale and inefficiency.
We propose the first solution of its kind for QAP in the learn-to-improve category.
arXiv Detail & Related papers (2024-06-14T10:15:03Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - iPINNs: Incremental learning for Physics-informed neural networks [66.4795381419701]
Physics-informed neural networks (PINNs) have recently become a powerful tool for solving partial differential equations (PDEs).
We propose incremental PINNs that can learn multiple tasks sequentially without additional parameters for new tasks and improve performance for every equation in the sequence.
Our approach learns multiple PDEs starting from the simplest one by creating its own subnetwork for each PDE and allowing each subnetwork to overlap with previously learned subnetworks.
arXiv Detail & Related papers (2023-04-10T20:19:20Z) - On the Global Convergence of Fitted Q-Iteration with Two-layer Neural Network Parametrization [33.12181620473604]
We study a Fitted Q-Iteration with two-layer ReLU neural network parametrization, and find the sample complexity guarantees for the algorithm.
We show that this approach achieves a sample complexity of $\tilde{\mathcal{O}}(1/\epsilon^2)$, which is order-optimal.
arXiv Detail & Related papers (2022-11-14T19:00:24Z) - Learning to Detect Critical Nodes in Sparse Graphs via Feature Importance Awareness [53.351863569314794]
The critical node problem (CNP) aims to find a set of critical nodes from a network whose deletion maximally degrades the pairwise connectivity of the residual network.
This work proposes a feature importance-aware graph attention network for node representation.
This representation is combined with a dueling double deep Q-network to create the first end-to-end algorithm for solving CNP.
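The CNP objective above (pairwise connectivity of the residual network) can be sketched directly; this is a minimal illustrative implementation with a toy graph, not code from the paper.

```python
def pairwise_connectivity(adj, removed):
    """Number of connected node pairs remaining after deleting `removed`.
    CNP seeks the node set whose removal minimizes this quantity."""
    alive = set(adj) - set(removed)
    seen, total = set(), 0
    for start in alive:
        if start in seen:
            continue
        # Depth-first search over the residual graph to size one component.
        size, stack = 0, [start]
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    stack.append(v)
        # A component of k nodes contributes k*(k-1)/2 connected pairs.
        total += size * (size - 1) // 2
    return total

# Toy path graph 0-1-2-3-4: deleting the middle node 2 splits it into
# two components of two nodes each.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

On this path graph the full connectivity is 10 pairs; removing node 2 leaves only 2, which is why the middle node is the critical one.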
arXiv Detail & Related papers (2021-12-03T14:23:05Z) - Learning the Markov Decision Process in the Sparse Gaussian Elimination [0.0]
We propose a learning-based approach for sparse Gaussian elimination.
We propose Q-learning algorithms for the main modules of the sparse solver.
arXiv Detail & Related papers (2021-09-30T08:56:39Z) - Accelerating Quadratic Optimization with Reinforcement Learning [39.64039435793601]
We show how Reinforcement Learning can learn a policy to tune parameters to accelerate convergence.
Our policy, RLQP, significantly outperforms state-of-the-art QP solvers by up to 3x.
RLQP generalizes surprisingly well to previously unseen problems with varying dimension and structure from different applications.
arXiv Detail & Related papers (2021-07-22T17:59:10Z) - Q-Match: Iterative Shape Matching via Quantum Annealing [64.74942589569596]
Finding shape correspondences can be formulated as an NP-hard quadratic assignment problem (QAP).
This paper proposes Q-Match, a new iterative quantum method for QAPs inspired by the alpha-expansion algorithm.
Q-Match can be applied for shape matching problems iteratively, on a subset of well-chosen correspondences, allowing us to scale to real-world problems.
arXiv Detail & Related papers (2021-05-06T17:59:38Z) - Deep Policy Dynamic Programming for Vehicle Routing Problems [89.96386273895985]
We propose Deep Policy Dynamic Programming (DPDP) to combine the strengths of learned neural heuristics with those of dynamic programming algorithms.
DPDP prioritizes and restricts the DP state space using a policy derived from a deep neural network, which is trained to predict edges from example solutions.
We evaluate our framework on the travelling salesman problem (TSP) and the vehicle routing problem (VRP) and show that the neural policy improves the performance of (restricted) DP algorithms.
arXiv Detail & Related papers (2021-02-23T15:33:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.