Differentiation Through Black-Box Quadratic Programming Solvers
- URL: http://arxiv.org/abs/2410.06324v4
- Date: Thu, 30 Oct 2025 15:07:47 GMT
- Title: Differentiation Through Black-Box Quadratic Programming Solvers
- Authors: Connor W. Magoon, Fengyu Yang, Noam Aigerman, Shahar Z. Kovalsky
- Abstract summary: Differentiable optimization has attracted significant research interest, particularly for quadratic programming (QP). We introduce dQP, a modular and solver-agnostic framework for plug-and-play differentiation of virtually any QP solver. Building on this result, we provide a minimal-overhead, open-source implementation that seamlessly integrates with over 15 state-of-the-art solvers.
- Score: 21.717766458737426
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differentiable optimization has attracted significant research interest, particularly for quadratic programming (QP). Existing approaches for differentiating the solution of a QP with respect to its defining parameters often rely on specific integrated solvers. This integration limits their applicability, including their use in neural network architectures and bi-level optimization tasks, restricting users to a narrow selection of solver choices. To address this limitation, we introduce dQP, a modular and solver-agnostic framework for plug-and-play differentiation of virtually any QP solver. A key insight we leverage to achieve modularity is that, once the active set of inequality constraints is known, both the solution and its derivative can be expressed using simplified linear systems that share the same matrix. This formulation fully decouples the computation of the QP solution from its differentiation. Building on this result, we provide a minimal-overhead, open-source implementation ( https://github.com/cwmagoon/dQP ) that seamlessly integrates with over 15 state-of-the-art solvers. Comprehensive benchmark experiments demonstrate dQP's robustness and scalability, particularly highlighting its advantages in large-scale sparse problems.
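To make the key insight concrete, here is a minimal NumPy sketch (ours, not the dQP implementation; the real system also detects the active set and exploits sparse factorizations). With the active inequalities fixed and treated as equalities, the QP solution and its derivative with respect to the linear term $q$ are obtained from linear systems sharing one KKT matrix:

```python
import numpy as np

def solve_and_differentiate(Q, q, A, b):
    """Solve min 0.5 x^T Q x + q^T x s.t. A x = b (active constraints as
    equalities) and return dx/dq; illustrative names, toy-scale dense code."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])      # shared KKT matrix
    z = np.linalg.solve(K, np.concatenate([-q, b]))
    x, lam = z[:n], z[n:]
    # Differentiating the KKT conditions w.r.t. q gives K [dx; dlam] = [-I; 0],
    # so the derivative reuses the same matrix with new right-hand sides.
    dz = np.linalg.solve(K, -np.vstack([np.eye(n), np.zeros((m, n))]))
    return x, lam, dz[:n, :]

Q = np.diag([2.0, 2.0])
x, lam, dx_dq = solve_and_differentiate(Q, np.zeros(2),
                                        np.array([[1.0, 1.0]]), np.array([1.0]))
print(x)       # [0.5 0.5]
print(dx_dq)   # sensitivity of the solution to the linear objective term
```

In a full pipeline, the forward solve can be delegated to any black-box solver that reports the active set; only the two linear solves above are then needed for differentiation.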
Related papers
- A Penalty Approach for Differentiation Through Black-Box Quadratic Programming Solvers [29.582959310549594]
dXPP is a penalty-based differentiation framework that decouples QP solving from differentiation. We evaluate dXPP on various tasks, including randomly generated QPs, large-scale sparse projection problems, and a real-world portfolio optimization task.
arXiv Detail & Related papers (2026-02-15T14:05:36Z) - An Expansion-Based Approach for Quantified Integer Programming [0.0]
Quantified Integer Programming (QIP) bridges multiple domains by extending Quantified Boolean Formulas (QBF). QIP provides a versatile framework for addressing complex decision-making scenarios. We introduce an expansion-based approach for QIP using Counterexample-Guided Abstraction Refinement (CEGAR).
arXiv Detail & Related papers (2025-06-04T21:14:14Z) - Efficient QAOA Architecture for Solving Multi-Constrained Optimization Problems [3.757262277494307]
This paper proposes a novel combination of constraint encoding methods for the Quantum Approximate Optimization Ansatz. One-hot constraints are enforced through $XY$-mixers, which naturally restrict the search space to the feasible subspace. Since $XY$-mixers restrict the search space, specific state-vector entries are always zero and can be omitted from the simulation, saving valuable memory and computing resources.
arXiv Detail & Related papers (2025-06-03T17:46:53Z) - Single-loop Algorithms for Stochastic Non-convex Optimization with Weakly-Convex Constraints [49.76332265680669]
This paper examines a crucial subset of problems where both the objective and constraint functions are weakly convex. Existing methods often face limitations, including slow convergence rates or reliance on double-loop designs. We introduce a novel single-loop penalty-based algorithm to overcome these challenges.
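A caricature of the single-loop idea, under our own toy assumptions rather than the paper's algorithm: the penalty weight grows inside the same loop that takes gradient steps, so no inner subproblem is ever solved to completion.

```python
# Toy single-loop penalty method: minimize f(x) = (x - 2)^2 subject to
# x - 1 <= 0 by stepping on f(x) + (beta/2) * max(x - 1, 0)^2 while
# increasing beta on the fly (illustrative constants throughout).
x, beta = 0.0, 1.0
for _ in range(2000):
    grad = 2.0 * (x - 2.0) + beta * max(x - 1.0, 0.0)  # penalized gradient
    x -= grad / (2.0 + beta)       # curvature-matched step keeps one loop stable
    beta = min(beta * 1.01, 1e4)   # penalty schedule, no inner loop
print(x)  # approaches the constrained minimizer x = 1
```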
arXiv Detail & Related papers (2025-04-21T17:15:48Z) - NeuralQP: A General Hypergraph-based Optimization Framework for Large-scale QCQPs [8.503330120957052]
This paper introduces NeuralQP, a general hypergraph-based framework for large-scale Quadratically Constrained Quadratic Programs (QCQPs).
We prove that our framework UniEGNN with our hypergraph representation is equivalent to the Interior-Point Method (IPM) for quadratic programming.
Experiments on two benchmark problems and large-scale real-world instances from QPLIB demonstrate that NeuralQP outperforms state-of-the-art solvers.
arXiv Detail & Related papers (2024-09-28T10:42:47Z) - Improving the trainability of VQE on NISQ computers for solving portfolio optimization using convex interpolation [8.186804065389007]
We improve the trainability of the variational quantum eigensolver (VQE) by utilizing convex interpolation to solve portfolio optimization. Based on convex interpolation, the location of the ground state can be evaluated by learning the property of a small subset of basis states in the Hilbert space. The successful implementation of a $40$-qubit experiment using only $10$ superconducting qubits demonstrates the effectiveness of our proposal.
arXiv Detail & Related papers (2024-07-08T03:51:54Z) - WANCO: Weak Adversarial Networks for Constrained Optimization problems [5.257895611010853]
We first transform constrained optimization problems into minimax problems using the augmented Lagrangian method.
We then use two (or several) deep neural networks to represent the primal and dual variables respectively.
The parameters in the neural networks are then trained by an adversarial process.
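A finite-dimensional caricature of this adversarial scheme (ours; plain vectors stand in for the primal and dual networks): gradient descent on the primal variable alternates with gradient ascent on the multiplier of the augmented Lagrangian.

```python
import numpy as np

# Minimize f(x) = ||x||^2 subject to g(x) = x0 + x1 - 1 = 0 via
# descent-ascent on L(x, lam) = f(x) + lam * g(x) + (rho/2) * g(x)^2.
f_grad = lambda x: 2.0 * x
g = lambda x: x[0] + x[1] - 1.0
g_grad = lambda x: np.array([1.0, 1.0])

x, lam, rho = np.zeros(2), 0.0, 5.0
for _ in range(2000):
    x -= 0.01 * (f_grad(x) + (lam + rho * g(x)) * g_grad(x))  # primal descent
    lam += 0.01 * g(x)                                        # dual ascent
print(x)  # approaches [0.5, 0.5], the constrained minimizer
```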
arXiv Detail & Related papers (2024-07-04T05:37:48Z) - Learning Solution-Aware Transformers for Efficiently Solving Quadratic Assignment Problem [27.33966993065248]
This work focuses on learning-based solutions for efficiently solving the Quadratic Assignment Problem (QAP).
Current research on QAPs suffers from limited scale and inefficiency.
We propose the first solution of its kind for QAP in the learn-to-improve category.
arXiv Detail & Related papers (2024-06-14T10:15:03Z) - Optimized QUBO formulation methods for quantum computing [0.4999814847776097]
We show how to apply our techniques in the case of an NP-hard optimization problem inspired by a real-world financial scenario.
We then submit instances of this problem to two D-Wave quantum annealers, comparing the performance of our novel approach with the standard methods used in these scenarios.
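For context, the penalty trick behind such QUBO formulations can be shown on a toy instance (illustrative data, not the paper's financial problem): a linear equality constraint over binary variables is folded into the quadratic objective, using $x_i^2 = x_i$ to absorb linear terms into the diagonal.

```python
import numpy as np
from itertools import product

Q = np.array([[1.0, -2.0], [-2.0, 3.0]])          # original objective x^T Q x
A = np.array([[1.0, 1.0]]); b = np.array([1.0])   # constraint: x0 + x1 = 1
P = 10.0                                          # penalty weight
# P * ||A x - b||^2 = P * (x^T A^T A x - 2 b^T A x + b^T b); for binary x,
# the linear term moves onto the diagonal and the constant is dropped.
Q_qubo = Q + P * (A.T @ A) + np.diag(-2.0 * P * (A.T @ b).ravel())
best = min(product([0, 1], repeat=2),
           key=lambda x: np.array(x) @ Q_qubo @ np.array(x))
print(best)  # (1, 0): the feasible minimizer of the original problem
```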
arXiv Detail & Related papers (2024-06-11T19:59:05Z) - Pointer Networks with Q-Learning for Combinatorial Optimization [55.2480439325792]
We introduce the Pointer Q-Network (PQN), a hybrid neural architecture that integrates model-free Q-value policy approximation with Pointer Networks (Ptr-Nets).
Our empirical results demonstrate the efficacy of this approach, also testing the model in unstable environments.
arXiv Detail & Related papers (2023-11-05T12:03:58Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-CSP, Maximum-Weight-Bipartite-Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis, we introduce a novel regularization process over vanilla gradient descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
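As a generic illustration of unrolling (not the paper's folded-optimization system), the snippet below backpropagates through the iterations of projected gradient descent with PyTorch autograd; every solver step becomes a node in the computation graph.

```python
import torch

def unrolled_solver(A, b, steps=200, lr=0.01):
    """Projected gradient descent for min ||A x - b||^2 s.t. x >= 0,
    written so autograd can differentiate through every iteration."""
    x = torch.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(steps):
        grad = 2.0 * A.T @ (A @ x - b)
        x = torch.clamp(x - lr * grad, min=0.0)  # projection onto x >= 0
    return x

A = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
b = torch.tensor([1.0, 2.0], requires_grad=True)
x_star = unrolled_solver(A, b)
x_star.sum().backward()  # gradients flow back through all 200 iterations
print(b.grad)
```

Storing every iteration for the backward pass is costly, which is the kind of overhead the analytical backward models above aim to avoid.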
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Learning differentiable solvers for systems with hard constraints [48.54197776363251]
We introduce a practical method to enforce partial differential equation (PDE) constraints for functions defined by neural networks (NNs).
We develop a differentiable PDE-constrained layer that can be incorporated into any NN architecture.
Our results show that incorporating hard constraints directly into the NN architecture achieves much lower test error when compared to training on an unconstrained objective.
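One common way to realize such a hard-constraint layer, shown schematically here rather than as the paper's PDE-constrained construction, is an orthogonal projection of the network output onto an affine constraint set, so the constraint holds exactly for every input.

```python
import torch

class AffineProjection(torch.nn.Module):
    """Project x onto {x : A x = b}; the projection is differentiable,
    so it can sit inside any architecture. Toy illustration."""
    def __init__(self, A, b):
        super().__init__()
        self.register_buffer("A", A)
        self.register_buffer("b", b)
        self.register_buffer("AAt_inv", torch.linalg.inv(A @ A.T))

    def forward(self, x):
        residual = x @ self.A.T - self.b           # A x - b, batched
        return x - residual @ self.AAt_inv @ self.A

net = torch.nn.Sequential(
    torch.nn.Linear(4, 3),
    AffineProjection(torch.tensor([[1.0, 1.0, 1.0]]), torch.tensor([1.0])),
)
out = net(torch.randn(8, 4))
print(out.sum(dim=1))  # every row sums to 1: the constraint holds exactly
```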
arXiv Detail & Related papers (2022-07-18T15:11:43Z) - A Constrained Optimization Approach to Bilevel Optimization with Multiple Inner Minima [49.320758794766185]
We propose a new approach, which converts the bilevel problem into an equivalent constrained optimization problem that a primal-dual algorithm can then solve.
Such an approach enjoys a few advantages: it (a) addresses the multiple-inner-minima challenge and (b) is fully first-order, requiring no Jacobian computations.
arXiv Detail & Related papers (2022-03-01T18:20:01Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Efficient differentiable quadratic programming layers: an ADMM approach [0.0]
We present an alternative network layer architecture based on the alternating direction method of multipliers (ADMM).
Backward differentiation is performed by implicit differentiation of the residual map of a modified fixed-point iteration.
Simulated results demonstrate the computational advantage of the ADMM layer, which for medium-scale problems is approximately an order of magnitude faster than the OptNet quadratic programming layer.
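The backward idea generalizes beyond ADMM and is easy to state in one dimension: for a fixed point $x^* = T(x^*, \theta)$, implicit differentiation of the residual $x - T(x, \theta)$ gives $dx^*/d\theta = (1 - \partial T/\partial x)^{-1}\,\partial T/\partial \theta$. Below is a scalar sketch with a toy contraction standing in for the paper's modified fixed-point iteration:

```python
import torch

def T(x, theta):
    return 0.5 * torch.tanh(x) + theta   # toy contraction (|T'| <= 0.5 < 1)

theta = torch.tensor(0.3, requires_grad=True)
x = torch.tensor(0.0)
with torch.no_grad():                    # forward pass: any black-box iteration
    for _ in range(100):
        x = T(x, theta)
# Backward pass differentiates the residual map once, not the 100 iterations.
x = x.detach().requires_grad_(True)
Tx = T(x, theta)
dT_dx, = torch.autograd.grad(Tx, x, retain_graph=True)
dT_dtheta, = torch.autograd.grad(Tx, theta)
print(dT_dtheta / (1.0 - dT_dx))         # dx*/dtheta via implicit differentiation
```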
arXiv Detail & Related papers (2021-12-14T15:25:07Z) - Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piece-wise linear value function.
$\nu$-SDDP can significantly reduce problem-solving cost without sacrificing solution quality.
arXiv Detail & Related papers (2021-12-01T22:55:23Z) - Physics and Equality Constrained Artificial Neural Networks: Application
to Partial Differential Equations [1.370633147306388]
Physics-informed neural networks (PINNs) have been proposed to learn the solution of partial differential equations (PDEs).
Here, we show that this specific way of formulating the objective function is the source of severe limitations in the PINN approach.
We propose a versatile framework that can tackle both inverse and forward problems.
arXiv Detail & Related papers (2021-09-30T05:55:35Z) - Efficient and Modular Implicit Differentiation [68.74748174316989]
We propose a unified, efficient and modular approach for implicit differentiation of optimization problems.
We show that seemingly simple principles allow us to recover many recently proposed implicit differentiation methods and to create new ones easily.
arXiv Detail & Related papers (2021-05-31T17:45:58Z)
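The recipe these principles boil down to: differentiate the optimality conditions rather than the solver. As a toy ridge-regression instance (ours, not the paper's library), the solution $x^*(\theta)$ of $\min_x \|Ax - y\|^2 + \theta \|x\|^2$ satisfies $F(x, \theta) = (A^\top A + \theta I)x - A^\top y = 0$, so $dx^*/d\theta = -(\partial F/\partial x)^{-1}\,\partial F/\partial \theta = -(A^\top A + \theta I)^{-1} x^*$:

```python
import numpy as np

rng = np.random.default_rng(0)
A, y, theta = rng.normal(size=(5, 3)), rng.normal(size=5), 0.1
H = A.T @ A + theta * np.eye(3)               # dF/dx at the solution
x_star = np.linalg.solve(H, A.T @ y)          # forward solve
dx_dtheta = -np.linalg.solve(H, x_star)       # implicit derivative
# finite-difference check of the implicit derivative
eps = 1e-6
x_eps = np.linalg.solve(A.T @ A + (theta + eps) * np.eye(3), A.T @ y)
print(np.allclose(dx_dtheta, (x_eps - x_star) / eps, atol=1e-4))  # True
```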