A Graph Neural Network-Based QUBO-Formulated Hamiltonian-Inspired Loss
Function for Combinatorial Optimization using Reinforcement Learning
- URL: http://arxiv.org/abs/2308.13978v2
- Date: Thu, 28 Sep 2023 13:08:22 GMT
- Title: A Graph Neural Network-Based QUBO-Formulated Hamiltonian-Inspired Loss
Function for Combinatorial Optimization using Reinforcement Learning
- Authors: Redwan Ahmed Rizvee, Md. Mosaddek Khan
- Abstract summary: The Hamiltonian function is often used to formulate QUBO problems, where it serves as the objective function to be optimized.
PI-GNN, a generic scalable framework, has been proposed to address Combinatorial Optimization (CO) problems over graphs based on a simple Graph Neural Network (GNN) architecture.
In our experiments, we observed up to 44% improvement in the RL-based setup compared to the PI-GNN algorithm.
- Score: 1.566437882635872
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quadratic Unconstrained Binary Optimization (QUBO) is a generic technique for modeling various NP-hard combinatorial optimization problems in terms of binary variables. The Hamiltonian function is often used to formulate QUBO problems, where it serves as the objective function to be optimized. Recently, PI-GNN, a generic scalable framework, has been proposed to address Combinatorial Optimization (CO) problems over graphs based on a simple Graph Neural Network (GNN) architecture. Its novel contribution was a generic QUBO-formulated Hamiltonian-inspired loss function optimized through a GNN. In this study, we address a crucial issue with this setup that is especially pronounced in denser graphs. The reinforcement learning-based paradigm has also been widely used to address numerous CO problems. Here, we formulate and empirically evaluate the compatibility of the QUBO-formulated Hamiltonian as a generic reward function in the reinforcement learning paradigm, so that the actual node projection status during training is integrated directly into the reward. In our experiments, we observed up to 44% improvement in the RL-based setup compared to the PI-GNN algorithm. Our implementation can be found at https://github.com/rizveeredwan/learning-graph-structure.
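For intuition, the following is a minimal sketch (not the authors' implementation; the function names and the Max-Cut encoding are illustrative assumptions) of how a QUBO-formulated Hamiltonian H(x) = x^T Q x is typically relaxed into a differentiable loss over GNN output probabilities, in the spirit of PI-GNN:

```python
import torch

def qubo_hamiltonian_loss(p: torch.Tensor, Q: torch.Tensor) -> torch.Tensor:
    # Relaxed QUBO Hamiltonian H(x) = x^T Q x: the binary variables x_i are
    # replaced by GNN output probabilities p_i in [0, 1], which makes the
    # Hamiltonian differentiable and minimizable by gradient descent.
    return p @ Q @ p

def maxcut_qubo(edges, n: int) -> torch.Tensor:
    # Standard Max-Cut QUBO: Q[i][i] = -deg(i), Q[i][j] = Q[j][i] = 1,
    # so H(x) = sum over edges (i, j) of (2*x_i*x_j - x_i - x_j),
    # which is minimized when as many edges as possible are cut.
    Q = torch.zeros(n, n)
    for i, j in edges:
        Q[i, j] += 1.0
        Q[j, i] += 1.0
        Q[i, i] -= 1.0
        Q[j, j] -= 1.0
    return Q

# Toy usage on a 4-cycle; p would normally come from a GNN forward pass.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
Q = maxcut_qubo(edges, n=4)
p = torch.rand(4, requires_grad=True)  # stand-in for GNN node probabilities
loss = qubo_hamiltonian_loss(p, Q)
loss.backward()  # gradients would flow into the GNN parameters
```

After training, the relaxed probabilities must be projected back to binary values (e.g., by thresholding at 0.5); the gap between the relaxed loss and the Hamiltonian of the projected assignment is what motivates integrating the actual node projection status into training, as the abstract describes.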
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Enhancing GNNs Performance on Combinatorial Optimization by Recurrent Feature Update [0.09986418756990156]
We introduce a novel algorithm, denoted hereafter as QRF-GNN, leveraging the power of GNNs to efficiently solve combinatorial optimization (CO) problems.
It relies on unsupervised learning by minimizing the loss function derived from QUBO relaxation.
Results of experiments show that QRF-GNN drastically surpasses existing learning-based approaches and is comparable to state-of-the-art conventional solvers.
arXiv Detail & Related papers (2024-07-23T13:34:35Z)
- Decision-focused Graph Neural Networks for Combinatorial Optimization [62.34623670845006]
An emerging strategy to tackle optimization problems involves the adoption of graph neural networks (GNNs) as an alternative to traditional algorithms.
Despite the growing popularity of GNNs and traditional algorithm solvers in the realm of CO, there is limited research on their integrated use and the correlation between them within an end-to-end framework.
We introduce a decision-focused framework that utilizes GNNs to address CO problems with auxiliary support.
arXiv Detail & Related papers (2024-06-05T22:52:27Z)
- Combinatorial Optimization with Automated Graph Neural Networks [28.19349828026972]
We present a new class of AUTOmated GNNs for solving NP-hard CO problems, namely AutoGNP.
The idea of AutoGNP is to use graph neural architecture search algorithms to automatically find the best GNNs for a given NP-hard optimization problem.
arXiv Detail & Related papers (2024-06-05T02:43:41Z)
- Hamiltonian-based Quantum Reinforcement Learning for Neural Combinatorial Optimization [2.536162003546062]
We introduce Hamiltonian-based Quantum Reinforcement Learning (QRL), an approach at the intersection of Quantum Computing (QC) and Neural Combinatorial Optimization (NCO).
Our ansatzes show favourable trainability properties compared to hardware-efficient ansatzes, while also not being limited to graph-based problems, unlike previous works.
In this work, we evaluate the performance of Hamiltonian-based QRL on a diverse set of optimization problems to demonstrate the broad applicability of our approach and compare it to QAOA.
arXiv Detail & Related papers (2024-05-13T14:36:22Z)
- A Graph Neural Network-Based QUBO-Formulated Hamiltonian-Inspired Loss Function for Combinatorial Optimization using Reinforcement Learning [1.325953054381901]
We introduce a novel Monte Carlo Tree Search-based strategy with Graph Neural Networks (GNNs).
We identify a behavioral pattern related to PI-GNN and devise strategies to improve its performance.
We also focus on creating a bridge between the RL-based solutions and the QUBO-formulated Hamiltonian (see the sketch after this list).
arXiv Detail & Related papers (2023-11-27T19:33:14Z)
- GPN: A Joint Structural Learning Framework for Graph Neural Networks [36.38529113603987]
We propose a GNN-based joint learning framework that simultaneously learns the graph structure and the downstream task.
Our method is the first GNN-based bilevel optimization framework for resolving this task.
arXiv Detail & Related papers (2022-05-12T09:06:04Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
- Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, link prediction, and community detection.
However, some kinds of graph applications, such as graph compression and edge partition, are very hard to reduce to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications with a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
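As referenced in the Monte Carlo Tree Search entry above, one hedged way to bridge RL-based solutions and the QUBO-formulated Hamiltonian is to reward an agent with the decrease in the Hamiltonian caused by its action. This sketch is an illustrative reading, not the papers' exact reward definition; hamiltonian and flip_reward are hypothetical names:

```python
import torch

def hamiltonian(x: torch.Tensor, Q: torch.Tensor) -> float:
    # Exact QUBO Hamiltonian H(x) = x^T Q x on a binary assignment x.
    return float(x @ Q @ x)

def flip_reward(x: torch.Tensor, Q: torch.Tensor, i: int) -> float:
    # Reward for the RL action "flip node i": the decrease in H, so moves
    # toward lower-energy (better) assignments earn positive reward.
    # The flip is applied in place, mirroring an environment step.
    before = hamiltonian(x, Q)
    x[i] = 1.0 - x[i]
    return before - hamiltonian(x, Q)

# Toy episode step on a random symmetric QUBO instance.
torch.manual_seed(0)
Q = torch.randn(5, 5)
Q = (Q + Q.T) / 2           # symmetrize, as QUBO matrices usually are
x = torch.zeros(5)          # start from the all-zeros assignment
r = flip_reward(x, Q, i=2)  # reward signal an RL agent could learn from
print(f"reward for flipping node 2: {r:.3f}")
```

Because such a reward is computed on the actual binary assignment rather than a relaxation, the learning signal reflects projected solution quality directly, which is the kind of compatibility the main abstract evaluates.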