Deep FlexQP: Accelerated Nonlinear Programming via Deep Unfolding
- URL: http://arxiv.org/abs/2512.01565v1
- Date: Mon, 01 Dec 2025 11:38:45 GMT
- Title: Deep FlexQP: Accelerated Nonlinear Programming via Deep Unfolding
- Authors: Alex Oshin, Rahul Vodeb Ghosh, Augustinos D. Saravanos, Evangelos A. Theodorou
- Abstract summary: We propose an always-feasible quadratic programming (QP) solver, FlexQP, based on an exact relaxation of the QP constraints. If the original constraints are feasible, then the solver finds the optimal solution to the original QP. If the constraints are infeasible, then the solution minimizes the constraint violation in a sparse manner.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose an always-feasible quadratic programming (QP) optimizer, FlexQP, which is based on an exact relaxation of the QP constraints. If the original constraints are feasible, then the optimizer finds the optimal solution to the original QP. On the other hand, if the constraints are infeasible, the optimizer identifies a solution that minimizes the constraint violation in a sparse manner. FlexQP scales favorably with respect to the problem dimension, is robust to both feasible and infeasible QPs with minimal assumptions on the problem data, and can be effectively warm-started. We subsequently apply deep unfolding to improve our optimizer through data-driven techniques, leading to an accelerated Deep FlexQP. By learning dimension-agnostic feedback policies for the parameters from a small number of training examples, Deep FlexQP generalizes to problems with larger dimensions and can optimize for many more iterations than it was initially trained for. Our approach outperforms two recently proposed state-of-the-art accelerated QP approaches on a suite of benchmark systems including portfolio optimization, classification, and regression problems. We provide guarantees on the expected performance of our deep QP optimizer through probably approximately correct (PAC) Bayes generalization bounds. These certificates are used to design an accelerated sequential quadratic programming solver that solves nonlinear optimal control and predictive safety filter problems faster than traditional approaches. Overall, our approach is very robust and greatly outperforms existing non-learning and learning-based optimizers in terms of both runtime and convergence to the optimal solution across multiple classes of NLPs.
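The abstract's core idea, an exact relaxation that returns the original QP solution when feasible and a sparse minimum-violation point otherwise, matches the standard exact L1 penalty construction. The sketch below (an illustration of that construction, not the authors' FlexQP algorithm; `rho` and the SLSQP solver are assumptions) relaxes $Ax \le b$ with slack variables $s \ge 0$ penalized by $\rho \sum_i s_i$:

```python
import numpy as np
from scipy.optimize import minimize

def relaxed_qp_sketch(Q, q, A, b, rho=100.0):
    """Solve min 0.5 x'Qx + q'x s.t. Ax <= b via an exact L1 penalty:
    slacks s >= 0 absorb any violation, penalized by rho * sum(s).
    For rho large enough, the relaxation is exact on feasible problems;
    on infeasible ones the L1 term keeps the violations sparse."""
    n, m = Q.shape[0], A.shape[0]

    def obj(z):
        x, s = z[:n], z[n:]
        return 0.5 * x @ Q @ x + q @ x + rho * s.sum()

    cons = [
        # Ax - b <= s  <=>  s - (Ax - b) >= 0
        {"type": "ineq", "fun": lambda z: z[n:] - (A @ z[:n] - b)},
        # s >= 0
        {"type": "ineq", "fun": lambda z: z[n:]},
    ]
    res = minimize(obj, np.zeros(n + m), constraints=cons, method="SLSQP")
    return res.x[:n], res.x[n:]

# Feasible example: min x^2 subject to x >= 1 (i.e. -x <= -1) -> x* = 1, s* = 0
Q = np.array([[2.0]]); q = np.array([0.0])
A = np.array([[-1.0]]); b = np.array([-1.0])
x, s = relaxed_qp_sketch(Q, q, A, b)
```

Because the problem above is feasible, the slack stays at zero and the relaxed solution coincides with the original QP's optimum, which is the exactness property the abstract describes.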
Related papers
- QAOA-Predictor: Forecasting Success Probabilities and Minimal Depths for Efficient Fixed-Parameter Optimization [0.9558392439655014]
We propose a novel approach using a Graph Neural Network (GNN) to predict Quantum Approximate Optimization Algorithm (QAOA) performance. We demonstrate that the GNN accurately predicts QAOA performance within a 10% margin of the true values.
arXiv Detail & Related papers (2026-03-03T13:43:44Z) - Optimizing Optimizers for Fast Gradient-Based Learning [53.81268610971847]
We lay the theoretical foundation for automating design in gradient-based learning. By treating gradient loss signals as a function that translates to parameter motions, the problem reduces to a family of convex optimization problems.
arXiv Detail & Related papers (2025-12-06T09:50:41Z) - Deep Distributed Optimization for Large-Scale Quadratic Programming [15.773581194857085]
This paper introduces a novel deep learning-aided distributed optimization architecture designed for tackling large-scale quadratic programming (QP) problems. DeepDistributedQP demonstrates strong generalization by training on small problems and scaling to solve much larger ones (up to 50K variables and 150K constraints) using the same policy.
arXiv Detail & Related papers (2024-12-11T09:45:00Z) - BPQP: A Differentiable Convex Optimization Framework for Efficient End-to-End Learning [17.662882719189373]
We introduce BPQP, a differentiable convex optimization framework for efficient end-to-end learning. To enhance efficiency, we reformulate the backward pass as a simplified and decoupled quadratic programming problem. Extensive experiments on both simulated and real-world datasets demonstrate that BPQP achieves a significant improvement in efficiency.
arXiv Detail & Related papers (2024-11-28T17:31:15Z) - Differentiation Through Black-Box Quadratic Programming Solvers [21.717766458737426]
Differentiable optimization has attracted significant research interest, particularly for quadratic programming (QP). We introduce dQP, a modular and solver-agnostic framework for plug-and-play differentiation of virtually any QP solver. Building on this result, we provide a minimal-overhead, open-source implementation that seamlessly integrates with over 15 state-of-the-art solvers.
arXiv Detail & Related papers (2024-10-08T20:01:39Z) - NeuralQP: A General Hypergraph-based Optimization Framework for Large-scale QCQPs [8.503330120957052]
This paper introduces NeuralQP, a general hypergraph-based framework for large-scale Quadratically Constrained Quadratic Programs (QCQPs).
We prove that our framework UniEGNN with our hypergraph representation is equivalent to the Interior-Point Method (IPM) for quadratic programming.
Experiments on two benchmark problems and large-scale real-world instances from QPLIB demonstrate that NeuralQP outperforms state-of-the-art solvers.
arXiv Detail & Related papers (2024-09-28T10:42:47Z) - Self-Supervised Learning of Iterative Solvers for Constrained Optimization [0.0]
We propose a learning-based iterative solver for constrained optimization.
It can obtain very fast and accurate solutions by customizing the solver to a specific parametric optimization problem.
A novel loss function based on the Karush-Kuhn-Tucker conditions of optimality is introduced, enabling fully self-supervised training of both neural networks.
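A KKT-based loss of the kind this summary describes can be written directly from the optimality conditions of an inequality-constrained QP; the residuals vanish exactly at a primal-dual solution, so no labeled solutions are needed. A minimal sketch (my own illustration of the idea, not the paper's loss function; the squared-residual weighting is an assumption):

```python
import numpy as np

def kkt_loss(x, lam, Q, q, A, b):
    """Self-supervised loss from the KKT residuals of
    min 0.5 x'Qx + q'x  s.t.  Ax <= b.
    All four terms are zero iff (x, lam) is a KKT point."""
    stat = Q @ x + q + A.T @ lam        # stationarity residual
    pfeas = np.maximum(A @ x - b, 0.0)  # primal feasibility violation
    dfeas = np.maximum(-lam, 0.0)       # dual feasibility violation (lam >= 0)
    comp = lam * (A @ x - b)            # complementary slackness
    return stat @ stat + pfeas @ pfeas + dfeas @ dfeas + comp @ comp

# min x^2 s.t. x <= 1: the unconstrained optimum x* = 0 is feasible,
# so the constraint is inactive and lam* = 0; the loss vanishes there.
Q = np.array([[2.0]]); q = np.array([0.0])
A = np.array([[1.0]]); b = np.array([1.0])
loss = kkt_loss(np.array([0.0]), np.array([0.0]), Q, q, A, b)
```

Training a network to output `(x, lam)` that drives this loss to zero is what makes the scheme fully self-supervised: the loss itself certifies optimality.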
arXiv Detail & Related papers (2024-09-12T14:17:23Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
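The equivalence these two papers exploit, that backpropagating through an unrolled solver reduces to a linear solve at the fixed point, follows from the implicit function theorem. A toy sketch (my own example problem, not the papers' framework): differentiate the fixed point of the Babylonian square-root iteration with respect to its parameter, using one (here scalar) linear solve instead of unrolling the backward pass through every iteration.

```python
import numpy as np

def solve(theta, iters=50):
    """Forward pass: iterate x <- f(x, theta) = 0.5*(x + theta/x)
    to its fixed point x* = sqrt(theta)."""
    x = 1.0
    for _ in range(iters):
        x = 0.5 * (x + theta / x)
    return x

theta = 4.0
x = solve(theta)  # fixed point x* = sqrt(4) = 2

# Implicit function theorem at x* = f(x*, theta):
#   dx/dtheta = (1 - df/dx)^{-1} * df/dtheta
# i.e. a linear system in place of backprop through all iterations.
df_dx = 0.5 * (1.0 - theta / x**2)       # = 0 at this fixed point
df_dtheta = 0.5 / x
dx_dtheta = df_dtheta / (1.0 - df_dx)    # analytic: d sqrt(theta)/dtheta = 1/(2*sqrt(theta))
```

Here the "linear system" is one-dimensional; in the QP setting it is the KKT system of the solver's fixed point, which is what makes the folded backward pass efficient.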
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new utilities-dual convex approach is proposed with novel integration of three ingredients: regularized policy, dual regularizer, and Nesterov's gradient descent dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - On Constraints in First-Order Optimization: A View from Non-Smooth Dynamical Systems [99.59934203759754]
We introduce a class of first-order methods for smooth constrained optimization.
A distinctive feature of our approach is that projections and optimizations over the entire feasible set are avoided.
The resulting algorithmic procedure is simple to implement even when constraints are nonlinear.
arXiv Detail & Related papers (2021-07-17T11:45:13Z) - Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as Quantum Approximate Optimization Algorithm (QAOA) are considered as one of the most encouraging approaches for taking advantage of near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study we apply a Cross-Entropy method to shape this landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.