NeuralQP: A General Hypergraph-based Optimization Framework for Large-scale QCQPs
- URL: http://arxiv.org/abs/2410.03720v1
- Date: Sat, 28 Sep 2024 10:42:47 GMT
- Title: NeuralQP: A General Hypergraph-based Optimization Framework for Large-scale QCQPs
- Authors: Zhixiao Xiong, Fangyu Zong, Huigen Ye, Hua Xu
- Abstract summary: This paper introduces NeuralQP, a general hypergraph-based framework for large-scale Quadratically Constrained Quadratic Programs (QCQPs).
We prove that our framework UniEGNN with our hypergraph representation is equivalent to the Interior-Point Method (IPM) for quadratic programming.
Experiments on two benchmark problems and large-scale real-world instances from QPLIB demonstrate that NeuralQP outperforms state-of-the-art solvers.
- Score: 8.503330120957052
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning (ML) optimization frameworks have gained attention for their ability to accelerate the optimization of large-scale Quadratically Constrained Quadratic Programs (QCQPs) by learning shared problem structures. However, existing ML frameworks often rely heavily on strong problem assumptions and large-scale solvers. This paper introduces NeuralQP, a general hypergraph-based framework for large-scale QCQPs. NeuralQP features two main components: Hypergraph-based Neural Prediction, which generates embeddings and predicted solutions for QCQPs without problem assumptions, and Parallel Neighborhood Optimization, which employs a McCormick relaxation-based repair strategy to identify and correct illegal variables, iteratively improving the solution with a small-scale solver. We further prove that our framework UniEGNN with our hypergraph representation is equivalent to the Interior-Point Method (IPM) for quadratic programming. Experiments on two benchmark problems and large-scale real-world instances from QPLIB demonstrate that NeuralQP outperforms state-of-the-art solvers (e.g., Gurobi and SCIP) in both solution quality and time efficiency, further validating the efficiency of ML optimization frameworks for QCQPs.
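As background on the repair step, McCormick relaxation replaces each bilinear term $w = xy$ with four linear inequalities forming its convex envelope over the variables' bounds; a predicted solution whose $w$ leaves this envelope is the kind of "illegal variable" the abstract describes. Below is a minimal sketch, assuming a single bilinear term and illustrative function names; it is not the authors' code.

```python
# Minimal sketch (assumed names, not the authors' code) of a McCormick
# envelope check for one bilinear term w = x*y with bounds
# xl <= x <= xu and yl <= y <= yu.

def mccormick_bounds(x, y, xl, xu, yl, yu):
    """Lower/upper McCormick envelopes of w = x*y at the point (x, y)."""
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper

def is_illegal(w, x, y, xl, xu, yl, yu, tol=1e-9):
    """Flag w for repair if it falls outside the convex envelope of x*y."""
    lo, hi = mccormick_bounds(x, y, xl, xu, yl, yu)
    return w < lo - tol or w > hi + tol
```

A repair pass in this spirit would re-solve only the flagged variables with a small-scale solver, matching the iterative neighborhood optimization the abstract describes.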
Related papers
- Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization [50.485788083202124]
Reinforcement Learning (RL) plays a crucial role in aligning large language models with human preferences and improving their ability to perform complex tasks.
We introduce Direct Q-function Optimization (DQO), which formulates the response generation process as a Markov Decision Process (MDP) and utilizes the soft actor-critic (SAC) framework to optimize a Q-function directly parameterized by the language model.
Experimental results on two math problem-solving datasets, GSM8K and MATH, demonstrate that DQO outperforms previous methods, establishing it as a promising offline reinforcement learning approach for aligning language models.
arXiv Detail & Related papers (2024-10-11T23:29:20Z)
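As background for the DQO entry above, soft actor-critic style methods optimize Q-functions toward a soft Bellman target that replaces the hard max with an entropy-regularized log-sum-exp. The sketch below is a generic illustration under assumed shapes, not DQO's implementation.

```python
import numpy as np

def soft_bellman_target(reward, q_next, alpha=1.0, gamma=1.0, done=False):
    """Soft Q target: r + gamma * alpha * logsumexp(Q(s', .) / alpha)."""
    if done:
        return reward
    z = q_next / alpha
    m = np.max(z)  # shift for a numerically stable log-sum-exp
    soft_value = alpha * (m + np.log(np.sum(np.exp(z - m))))
    return reward + gamma * soft_value

# Example: Q-values over a tiny next-token "action" set.
print(soft_bellman_target(reward=0.0, q_next=np.array([1.0, 0.5, -0.2])))
```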
- Differentiation Through Black-Box Quadratic Programming Solvers [16.543673072027136]
We introduce dQP, a modular framework that enables plug-and-play differentiation for any quadratic programming (QP) solver.
Our solution is based on the core theoretical insight that knowledge of the active constraint set at the QP optimum allows for explicit differentiation.
Our implementation, which will be made publicly available, interfaces with an existing framework that supports over 15 state-of-the-art QP solvers.
arXiv Detail & Related papers (2024-10-08T20:01:39Z)
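The core insight behind dQP can be made concrete: once the active constraints at the QP optimum are known, the optimum solves an equality-constrained KKT system, and implicit differentiation of that system gives exact derivatives. The sketch below (dense linear algebra, no degeneracy handling, illustrative names) differentiates the solution of min 0.5 x'Qx + q'x subject to Gx <= h with respect to q.

```python
import numpy as np

def qp_solution_jacobian_wrt_q(Q, G, active):
    """dx*/dq given the active set at the optimum.

    Treating the active rows GA as equalities, x* solves
        [Q   GA'] [x  ]   [-q ]
        [GA  0  ] [lam] = [ hA],
    and differentiating w.r.t. q yields K [dx; dlam] = [-I; 0].
    """
    GA = G[active]
    n, m = Q.shape[0], GA.shape[0]
    K = np.block([[Q, GA.T],
                  [GA, np.zeros((m, m))]])
    rhs = np.vstack([-np.eye(n), np.zeros((m, n))])
    return np.linalg.solve(K, rhs)[:n]  # top n rows are dx*/dq
```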
- Benchmarking Quantum Annealers with Near-Optimal Minor-Embedded Instances [0.0]
This paper establishes a new protocol to generate graph instances with their associated near-optimal minor-embedding mappings to D-Wave Quantum Annealers.
We use this method to benchmark QA on large instances of unconstrained and constrained optimization problems and compare the performance of the QPU with efficient classical solvers.
arXiv Detail & Related papers (2024-05-02T15:19:39Z)
- Black-Litterman Portfolio Optimization with Noisy Intermediate-Scale Quantum Computers [0.14732811715354452]
We demonstrate a practical application of noisy intermediate-scale quantum (NISQ) algorithms to enhance subroutines in the Black-Litterman (BL) portfolio optimization model.
As a proof of concept, we implement a 12-qubit example for selecting 6 assets out of a 12-asset pool.
arXiv Detail & Related papers (2023-12-01T19:42:04Z)
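For context on the Black-Litterman entry above, the classical posterior mean it builds on is a closed-form Bayesian blend of equilibrium returns with investor views; the paper's NISQ subroutines are not reproduced here. A textbook-style sketch with illustrative variable names:

```python
import numpy as np

def black_litterman_posterior(pi, Sigma, P, q, Omega, tau=0.05):
    """Posterior returns: (A + P'O^-1 P)^-1 (A pi + P'O^-1 q), A = (tau Sigma)^-1.

    pi: equilibrium returns, Sigma: asset covariance, P/q: view matrix and
    view returns, Omega: view uncertainty, tau: prior shrinkage scalar.
    """
    A = np.linalg.inv(tau * Sigma)
    O_inv = np.linalg.inv(Omega)
    return np.linalg.solve(A + P.T @ O_inv @ P, A @ pi + P.T @ O_inv @ q)
```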
- Multi-objective Quantum Annealing approach for solving flexible job shop scheduling in manufacturing [0.0]
This paper introduces a Quantum Annealing-based solving algorithm (QASA) to address the Flexible Job Shop Scheduling Problem.
Experiments on benchmark problems show that QASA, which combines tabu search, simulated annealing, and Quantum Annealing, outperforms a classical solving algorithm (CSA) in solution quality.
arXiv Detail & Related papers (2023-11-16T07:45:57Z)
- Pointer Networks with Q-Learning for Combinatorial Optimization [55.2480439325792]
We introduce the Pointer Q-Network (PQN), a hybrid neural architecture that integrates model-free Q-value policy approximation with Pointer Networks (Ptr-Nets).
Our empirical results demonstrate the efficacy of this approach, including tests of the model in unstable environments.
arXiv Detail & Related papers (2023-11-05T12:03:58Z)
- Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models [88.80146574509195]
Quantization is a promising approach for reducing memory overhead and accelerating inference.
We propose a novel zero-shot sharpness-aware quantization (ZSAQ) framework for the zero-shot quantization of various PLMs.
arXiv Detail & Related papers (2023-10-20T07:09:56Z)
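To make the quantization step in the ZSAQ entry concrete, here is a generic min-max uniform fake-quantization sketch; ZSAQ's sharpness-aware objective and zero-shot calibration are not reproduced, so treat this only as background.

```python
import numpy as np

def fake_quantize(w, num_bits=8):
    """Quantize w to num_bits unsigned integers, then map back to floats."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = max((w.max() - w.min()) / (qmax - qmin), 1e-12)
    zero_point = round(qmin - w.min() / scale)
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale  # float approximation of w

w = np.random.randn(4, 4)
print(np.abs(w - fake_quantize(w)).max())  # worst-case rounding error
```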
- End-to-end resource analysis for quantum interior point methods and portfolio optimization [63.4863637315163]
We provide a complete quantum circuit-level description of the algorithm from problem input to problem output.
We report the number of logical qubits and the quantity/depth of non-Clifford T-gates needed to run the algorithm.
arXiv Detail & Related papers (2022-11-22T18:54:48Z)
- On the Global Convergence of Fitted Q-Iteration with Two-layer Neural Network Parametrization [33.12181620473604]
We study Fitted Q-Iteration with a two-layer ReLU neural network parametrization and establish sample complexity guarantees for the algorithm.
We show that this approach achieves a sample complexity of $\tilde{\mathcal{O}}(1/\epsilon^2)$, which is order-optimal.
arXiv Detail & Related papers (2022-11-14T19:00:24Z)
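Below is a toy sketch of one Fitted Q-Iteration round with the two-layer ReLU parametrization analyzed above; the random batch, network sizes, and single gradient step (on the output layer only) are illustrative stand-ins, not the paper's algorithm.

```python
import numpy as np

# Toy sketch (illustrative, not the paper's algorithm): one round of
# Fitted Q-Iteration with a two-layer ReLU Q-network trained on a
# random offline batch of transitions (s, a, r, s').
rng = np.random.default_rng(0)
n, state_dim, n_actions, hidden, gamma = 256, 4, 3, 64, 0.9

S = rng.normal(size=(n, state_dim))          # states
A = rng.integers(n_actions, size=n)          # actions taken
R = rng.normal(size=n)                       # rewards
S2 = rng.normal(size=(n, state_dim))         # next states

W1 = rng.normal(scale=0.1, size=(state_dim, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, n_actions))

def q_values(states):
    """Two-layer ReLU network: Q(s, .) = relu(s W1) W2."""
    return np.maximum(states @ W1, 0.0) @ W2

# Bellman targets from the current network; the "fitted" step is a
# regression toward them (one gradient step on W2 shown for brevity).
targets = R + gamma * q_values(S2).max(axis=1)
err = q_values(S)[np.arange(n), A] - targets

H = np.maximum(S @ W1, 0.0)                  # hidden activations
grad = np.zeros((n_actions, hidden))
np.add.at(grad, A, err[:, None] * H / n)     # d(0.5*MSE)/dW2, per action
W2 -= 0.1 * grad.T
print("mean squared Bellman error:", np.mean(err ** 2))
```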
- Collaborative Intelligent Reflecting Surface Networks with Multi-Agent Reinforcement Learning [63.83425382922157]
Intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z)
- Scaling Quantum Approximate Optimization on Near-term Hardware [49.94954584453379]
We quantify scaling of the expected resource requirements by optimized circuits for hardware architectures with varying levels of connectivity.
We show that the number of measurements, and hence the total time to synthesize a solution, grows exponentially in problem size and problem graph degree.
These problems may be alleviated by increasing hardware connectivity or by recently proposed modifications to the QAOA that achieve higher performance with fewer circuit layers.
arXiv Detail & Related papers (2022-01-06T21:02:30Z)