NeuralQP: A General Hypergraph-based Optimization Framework for Large-scale QCQPs
- URL: http://arxiv.org/abs/2410.03720v1
- Date: Sat, 28 Sep 2024 10:42:47 GMT
- Title: NeuralQP: A General Hypergraph-based Optimization Framework for Large-scale QCQPs
- Authors: Zhixiao Xiong, Fangyu Zong, Huigen Ye, Hua Xu
- Abstract summary: This paper introduces NeuralQP, a general hypergraph-based framework for large-scale Quadratically Constrained Quadratic Programs (QCQPs).
We prove that our framework UniEGNN with our hypergraph representation is equivalent to the Interior-Point Method (IPM) for quadratic programming.
Experiments on two benchmark problems and large-scale real-world instances from QPLIB demonstrate that NeuralQP outperforms state-of-the-art solvers.
- Score: 8.503330120957052
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning (ML) optimization frameworks have gained attention for their ability to accelerate the optimization of large-scale Quadratically Constrained Quadratic Programs (QCQPs) by learning shared problem structures. However, existing ML frameworks often rely heavily on strong problem assumptions and large-scale solvers. This paper introduces NeuralQP, a general hypergraph-based framework for large-scale QCQPs. NeuralQP features two main components: Hypergraph-based Neural Prediction, which generates embeddings and predicted solutions for QCQPs without problem assumptions, and Parallel Neighborhood Optimization, which employs a McCormick relaxation-based repair strategy to identify and correct illegal variables, iteratively improving the solution with a small-scale solver. We further prove that our framework UniEGNN with our hypergraph representation is equivalent to the Interior-Point Method (IPM) for quadratic programming. Experiments on two benchmark problems and large-scale real-world instances from QPLIB demonstrate that NeuralQP outperforms state-of-the-art solvers (e.g., Gurobi and SCIP) in both solution quality and time efficiency, further validating the efficiency of ML optimization frameworks for QCQPs.
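As context for the repair step described above, the sketch below shows the standard McCormick envelope for a single bilinear term w = x·y and a simple feasibility check against it. This is a minimal illustration under assumed box bounds; the helper names and the way NeuralQP actually selects and repairs illegal variables are not taken from the paper.

```python
# Illustrative sketch only: standard McCormick envelope bounds for a bilinear
# term w = x * y with box bounds x in [xl, xu], y in [yl, yu].
# How NeuralQP applies such bounds to flag "illegal" variables is an
# assumption for illustration, not the authors' implementation.

def mccormick_bounds(xl, xu, yl, yu, x, y):
    """Return (lower, upper) McCormick envelope bounds on w = x * y."""
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper

def violates_envelope(w, xl, xu, yl, yu, x, y, tol=1e-6):
    """Flag a predicted value w for x*y that falls outside its envelope."""
    lower, upper = mccormick_bounds(xl, xu, yl, yu, x, y)
    return w < lower - tol or w > upper + tol

# Example: on [0, 1]^2 with x = 0.8, y = 0.3, a predicted w = 0.9 lies above
# the upper envelope (0.3), so this variable pair would need repair.
print(violates_envelope(0.9, 0.0, 1.0, 0.0, 1.0, 0.8, 0.3))  # True
```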
Related papers
- A Quantum Genetic Algorithm Framework for the MaxCut Problem [49.59986385400411]
The proposed method introduces a Quantum Genetic Algorithm (QGA) using a Grover-based evolutionary framework and divide-and-conquer principles.
On complete graphs, the proposed method consistently achieves the true optimal MaxCut values, outperforming the Semidefinite Programming (SDP) approach.
On Erdős-Rényi random graphs, the QGA demonstrates competitive performance, achieving median solutions within 92-96% of the SDP results.
arXiv Detail & Related papers (2025-01-02T05:06:16Z) - Deep Distributed Optimization for Large-Scale Quadratic Programming [15.773581194857085]
This paper introduces a novel deep learning-aided distributed optimization architecture designed for tackling large-scale Quadratic Programming (QP) problems.
DeepDistributedQP demonstrates strong generalization by training on small problems and scaling to solve much larger ones (up to 50K variables and 150K constraints) using the same policy.
arXiv Detail & Related papers (2024-12-11T09:45:00Z) - An Efficient Unsupervised Framework for Convex Quadratic Programs via Deep Unrolling [33.4994203652726]
We focus on unrolling "PDQP", a PDHG algorithm specialized for convex Quadratic Programs (QPs).
We propose a neural network model called "PDQP-net" to learn optimal QP solutions.
We show that PDQP-net trained in this unsupervised manner can effectively approximate optimal QP solutions.
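As background for the unrolling idea, the sketch below runs a generic, textbook-style primal-dual hybrid gradient (PDHG) loop on a small convex QP. It only illustrates the kind of iterate update an unrolled network would imitate; the actual PDQP algorithm (restarts, adaptive step sizes) and the PDQP-net architecture are not reproduced here, and the step sizes are illustrative assumptions.

```python
# Illustrative, textbook-style primal-dual (PDHG-type) iteration for
#   min 0.5 x'Qx + c'x  s.t.  Ax = b, x >= 0.
# This is NOT the PDQP algorithm or PDQP-net; it only shows the kind of
# update an unrolled network would mimic. Step sizes are illustrative.
import numpy as np

def pdhg_qp(Q, c, A, b, iters=5000, tau=1e-2, sigma=1e-2):
    n, m = Q.shape[0], A.shape[0]
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        x_new = np.maximum(x - tau * (Q @ x + c + A.T @ y), 0.0)  # primal step + projection onto x >= 0
        y = y + sigma * (A @ (2 * x_new - x) - b)                 # dual step with extrapolation
        x = x_new
    return x

# Tiny example: min 0.5*(x0^2 + x1^2)  s.t.  x0 + x1 = 1, x >= 0  ->  x = (0.5, 0.5)
Q = np.eye(2); c = np.zeros(2)
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
print(pdhg_qp(Q, c, A, b).round(3))  # approximately [0.5, 0.5]
```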
arXiv Detail & Related papers (2024-12-02T02:22:44Z) - Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization [49.362750475706235]
Reinforcement Learning (RL) plays a crucial role in aligning large language models with human preferences and improving their ability to perform complex tasks.
We introduce Direct Q-function Optimization (DQO), which formulates the response generation process as a Markov Decision Process (MDP) and utilizes the soft actor-critic (SAC) framework to optimize a Q-function directly parameterized by the language model.
Experimental results on two math problem-solving datasets, GSM8K and MATH, demonstrate that DQO outperforms previous methods, establishing it as a promising offline reinforcement learning approach for aligning language models.
arXiv Detail & Related papers (2024-10-11T23:29:20Z) - Differentiation Through Black-Box Quadratic Programming Solvers [16.543673072027136]
We introduce dQP, a modular framework that enables plug-and-play differentiation for any quadratic programming (QP) solver.
Our solution is based on the core theoretical insight that knowledge of the active constraint set at the QP optimum allows for explicit differentiation.
Our implementation, which will be made publicly available, interfaces with an existing framework that supports over 15 state-of-the-art QP solvers.
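The stated insight can be made concrete with a generic example: once the active inequality constraints at a QP optimum are known, the optimum solves an equality-constrained QP whose KKT system is linear, so the implicit function theorem yields the Jacobian of the solution. The sketch below differentiates x* with respect to the linear cost term q under assumed LICQ and strict complementarity; it is a minimal illustration, not the dQP implementation.

```python
# Minimal, generic illustration of active-set-based QP differentiation.
# QP:  min 0.5 x'Qx + q'x  s.t.  Gx <= h.  Given the active set at the optimum,
# the KKT system is linear and dx*/dq follows from the implicit function theorem.
# This is NOT the dQP implementation; LICQ and strict complementarity are assumed.
import numpy as np

def dx_dq(Q, q, G, h, active):
    """Jacobian of the optimal x* with respect to q, given the active constraint rows."""
    n = Q.shape[0]
    Ga = G[active]                       # active constraints hold with equality at x*
    m = Ga.shape[0]
    K = np.block([[Q, Ga.T],
                  [Ga, np.zeros((m, m))]])
    rhs = np.vstack([-np.eye(n), np.zeros((m, n))])
    J = np.linalg.solve(K, rhs)          # differentiate the linear KKT system
    return J[:n]                         # dx*/dq (the dlambda/dq block is discarded)

# Example: min 0.5||x||^2 + q'x with the constraint x[1] <= 0 active
# -> x* = (-q[0], 0), so dx*/dq = [[-1, 0], [0, 0]].
Q = np.eye(2); q = np.array([1.0, -2.0])
G = np.array([[0.0, 1.0]]); h = np.array([0.0])
print(dx_dq(Q, q, G, h, active=[0]))
```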
arXiv Detail & Related papers (2024-10-08T20:01:39Z) - Benchmarking Quantum Annealers with Near-Optimal Minor-Embedded Instances [0.0]
This paper establishes a new protocol to generate graph instances with their associated near-optimal minor-embedding mappings to D-Wave Quantum Annealers.
We use this method to benchmark QA on large instances of unconstrained and constrained optimization problems and compare the performance of the QPU with efficient classical solvers.
arXiv Detail & Related papers (2024-05-02T15:19:39Z) - Multi-objective Quantum Annealing approach for solving flexible job shop scheduling in manufacturing [0.0]
This paper introduces a Quantum Annealing-based solving algorithm (QASA) to address the Flexible Job Shop Scheduling problem.
Experiments on benchmark problems show QASA, combining tabu search, simulated annealing, and Quantum Annealing, outperforms a classical solving algorithm (CSA) in solution quality.
arXiv Detail & Related papers (2023-11-16T07:45:57Z) - Pointer Networks with Q-Learning for Combinatorial Optimization [55.2480439325792]
We introduce the Pointer Q-Network (PQN), a hybrid neural architecture that integrates model-free Q-value policy approximation with Pointer Networks (Ptr-Nets).
Our empirical results demonstrate the efficacy of this approach, including tests of the model in unstable environments.
arXiv Detail & Related papers (2023-11-05T12:03:58Z) - Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models [88.80146574509195]
Quantization is a promising approach for reducing memory overhead and accelerating inference.
We propose a novel Zero-Shot Sharpness-Aware Quantization (ZSAQ) framework for the zero-shot quantization of various PLMs.
arXiv Detail & Related papers (2023-10-20T07:09:56Z) - Collaborative Intelligent Reflecting Surface Networks with Multi-Agent Reinforcement Learning [63.83425382922157]
Intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z) - Scaling Quantum Approximate Optimization on Near-term Hardware [49.94954584453379]
We quantify scaling of the expected resource requirements by optimized circuits for hardware architectures with varying levels of connectivity.
We show that the number of measurements, and hence the total time to solution, grows exponentially with problem size and problem graph degree.
These problems may be alleviated by increasing hardware connectivity or by recently proposed modifications to the QAOA that achieve higher performance with fewer circuit layers.
arXiv Detail & Related papers (2022-01-06T21:02:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.