A Graph Neural Network-Based QUBO-Formulated Hamiltonian-Inspired Loss
Function for Combinatorial Optimization using Reinforcement Learning
- URL: http://arxiv.org/abs/2311.16277v1
- Date: Mon, 27 Nov 2023 19:33:14 GMT
- Title: A Graph Neural Network-Based QUBO-Formulated Hamiltonian-Inspired Loss
Function for Combinatorial Optimization using Reinforcement Learning
- Authors: Redwan Ahmed Rizvee, Raheeb Hasan and Md. Mosaddek Khan
- Abstract summary: We introduce a novel Monte Carlo Tree Search-based strategy with a Graph Neural Network (GNN).
We identify a behavioral pattern related to PI-GNN and devise strategies to improve its performance.
We also focus on creating a bridge between the RL-based solutions and the QUBO-formulated Hamiltonian.
- Score: 1.325953054381901
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quadratic Unconstrained Binary Optimization (QUBO) is a generic technique for
modeling various NP-hard Combinatorial Optimization (CO) problems in terms of
binary variables. The Ising Hamiltonian is used to model the energy function of a
system, and the mapping from QUBO to the Ising Hamiltonian is regarded as a
technique for solving canonical optimization problems through quantum
optimization algorithms.
Recently, PI-GNN, a generic framework, has been proposed to address CO problems
over graphs using a Graph Neural Network (GNN) architecture. It introduced a
generic QUBO-formulated Hamiltonian-inspired loss function that is optimized
directly by the GNN. PI-GNN is highly scalable, but it satisfies noticeably
fewer constraints than problem-specific algorithms, and this gap becomes more
pronounced as graph density increases. Here, we identify a behavioral pattern
behind this degradation and devise strategies to improve its performance.
Another line of literature uses Reinforcement Learning (RL) to solve the
aforementioned NP-hard problems using problem-specific reward functions. In
this work, we also focus on creating a bridge between RL-based solutions and
the QUBO-formulated Hamiltonian: we formulate the QUBO-formulated Hamiltonian
as a generic reward function in the RL-based paradigm and empirically evaluate
its compatibility. Furthermore, we introduce a novel Monte Carlo Tree
Search-based strategy with GNN, in which we apply a guided search through
manual perturbation of node labels during training. We empirically evaluated
our methods and observed up to a 44% reduction in the number of constraint
violations compared to PI-GNN.
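To make the loss described above concrete, here is a minimal sketch (not the
authors' code) of a QUBO-formulated Hamiltonian-inspired loss for Max-Cut:
binary node labels are relaxed to probabilities and H(p) = p^T Q p is minimized
by gradient descent, in the spirit of PI-GNN. For brevity, free per-node logits
stand in for the GNN output head, and the 4-node cycle graph and all names are
illustrative assumptions.

```python
import torch

# Toy 4-cycle graph; edges chosen purely for illustration.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

# Max-Cut QUBO matrix Q: minimizing x^T Q x over binary x maximizes the cut.
# Diagonal gets -deg(i); each edge contributes +2 off-diagonal.
Q = torch.zeros(n, n)
for i, j in edges:
    Q[i, i] -= 1.0
    Q[j, j] -= 1.0
    Q[i, j] += 2.0

# PI-GNN relaxes binary labels to probabilities produced by a GNN; here we
# optimize free per-node logits directly so the sketch stays self-contained.
torch.manual_seed(0)
theta = (0.1 * torch.randn(n)).requires_grad_()  # random init breaks symmetry
opt = torch.optim.Adam([theta], lr=0.1)

for step in range(300):
    p = torch.sigmoid(theta)      # relaxed node labels in (0, 1)
    loss = p @ Q @ p              # Hamiltonian-inspired loss H(p)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Project the relaxed solution back to binary labels and report its energy.
x = (torch.sigmoid(theta) > 0.5).float()
print("labels:", x.int().tolist(), "H(x):", float(x @ Q @ x))
```

The final thresholding step mirrors the usual projection in this family of
methods: the continuous relaxation is trained, then rounded to a binary
assignment whose Hamiltonian value counts the (un)satisfied constraints.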
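The abstract also frames the QUBO-formulated Hamiltonian as a generic reward
function for RL. Below is a hedged sketch of that bridge under two simplifying
assumptions that are ours, not the paper's: the action space is a single
node-label flip, and an epsilon-greedy rule stands in for the learned agent.
The reward for an action is the resulting drop in QUBO energy.

```python
import numpy as np

rng = np.random.default_rng(0)

def hamiltonian(x, Q):
    """QUBO energy H(x) = x^T Q x for a binary assignment x."""
    return float(x @ Q @ x)

def flip_reward(x, i, Q):
    """Reward for flipping node i: the drop in energy H(x) - H(x')."""
    x_new = x.copy()
    x_new[i] ^= 1
    return hamiltonian(x, Q) - hamiltonian(x_new, Q), x_new

# Reuse the toy 4-cycle Max-Cut QUBO from the previous sketch.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
Q = np.zeros((n, n))
for i, j in edges:
    Q[i, i] -= 1.0
    Q[j, j] -= 1.0
    Q[i, j] += 2.0

x = rng.integers(0, 2, size=n)        # random initial binary assignment
for step in range(20):
    if rng.random() < 0.2:            # explore: flip a random node
        i = int(rng.integers(n))
    else:                             # exploit: flip with the largest reward
        i = max(range(n), key=lambda j: flip_reward(x, j, Q)[0])
    r, x = flip_reward(x, i, Q)       # act and observe the QUBO-based reward

print("final labels:", x.tolist(), "H(x):", hamiltonian(x, Q))
```

In the paper's setting a learned policy (e.g., GNN-based, possibly guided by
tree search) would replace the greedy argmax; the point of the sketch is only
that the same Hamiltonian serves as a problem-agnostic reward signal.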
Related papers
- Enhancing GNNs Performance on Combinatorial Optimization by Recurrent Feature Update [0.09986418756990156]
We introduce a novel algorithm, denoted hereafter as QRF-GNN, leveraging the power of GNNs to efficiently solve Combinatorial optimization (CO) problems.
It relies on unsupervised learning by minimizing the loss function derived from QUBO relaxation.
Results of experiments show that QRF-GNN drastically surpasses existing learning-based approaches and is comparable to state-of-the-art conventional approaches.
arXiv Detail & Related papers (2024-07-23T13:34:35Z) - Decision-focused Graph Neural Networks for Combinatorial Optimization [62.34623670845006]
An emerging strategy to tackle optimization problems involves the adoption of graph neural networks (GNNs) as an alternative to traditional algorithms.
Despite the growing popularity of GNNs and traditional algorithm solvers in the realm of CO, there is limited research on their integrated use and the correlation between them within an end-to-end framework.
We introduce a decision-focused framework that utilizes GNNs to address CO problems with auxiliary support.
arXiv Detail & Related papers (2024-06-05T22:52:27Z) - Combinatorial Optimization with Automated Graph Neural Networks [28.19349828026972]
We present a new class of AUTOmated GNNs for solving NP-hard CO problems, namely AutoGNP.
The idea of AutoGNP is to use graph neural architecture search algorithms to automatically find the best GNNs for a given NP-hard optimization problem.
arXiv Detail & Related papers (2024-06-05T02:43:41Z) - Hamiltonian-based Quantum Reinforcement Learning for Neural Combinatorial Optimization [2.536162003546062]
We introduce Hamiltonian-based Quantum Reinforcement Learning (QRL), an approach at the intersection of Quantum Computing (QC) and Neural Combinatorial Optimization (NCO).
Our ansatzes show favourable trainability properties when compared to the hardware efficient ansatzes, while also not being limited to graph-based problems, unlike previous works.
In this work, we evaluate the performance of Hamiltonian-based QRL on a diverse set of optimization problems to demonstrate the broad applicability of our approach and compare it to QAOA.
arXiv Detail & Related papers (2024-05-13T14:36:22Z) - A Graph Neural Network-Based QUBO-Formulated Hamiltonian-Inspired Loss
Function for Combinatorial Optimization using Reinforcement Learning [1.566437882635872]
The Hamiltonian function is often used to formulate QUBO problems, serving as the objective function in the context of optimization.
PI-GNN, a generic scalable framework, has been proposed to address the Combinatorial Optimization (CO) problems over graphs based on a simple Graph Neural Network (GNN) architecture.
In our experiments, we observed up to 44% improvement in the RL-based setup compared to the PI-GNN algorithm.
arXiv Detail & Related papers (2023-08-27T00:57:01Z) - On Representing Linear Programs by Graph Neural Networks [30.998508318941017]
Graph neural network (GNN) is considered a suitable machine learning model for optimization problems.
This paper establishes the theoretical foundation of applying GNNs to solving linear program (LP) problems.
We show that properly built GNNs can reliably predict feasibility, boundedness, and an optimal solution for each LP in a broad class.
arXiv Detail & Related papers (2022-09-25T18:27:33Z) - Neural Improvement Heuristics for Graph Combinatorial Optimization
Problems [49.85111302670361]
We introduce a novel Neural Improvement (NI) model capable of handling graph-based problems where information is encoded in the nodes, edges, or both.
The presented model serves as a fundamental component for hill-climbing-based algorithms, guiding the selection of neighborhood operations at each iteration.
arXiv Detail & Related papers (2022-06-01T10:35:29Z) - Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how the variability in the space of solvers can improve the performance of neural ODEs.
We show that the right choice of solver parameterization can significantly affect neural ODEs models in terms of robustness to adversarial attacks.
arXiv Detail & Related papers (2021-03-15T17:26:34Z) - Fast Learning of Graph Neural Networks with Guaranteed Generalizability:
One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Optimization and Generalization Analysis of Transduction through
Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)