A Neural Network Framework for Discovering Closed-form Solutions to Quadratic Programs with Linear Constraints
- URL: http://arxiv.org/abs/2510.23737v1
- Date: Mon, 27 Oct 2025 18:11:44 GMT
- Title: A Neural Network Framework for Discovering Closed-form Solutions to Quadratic Programs with Linear Constraints
- Authors: Fuat Can Beylunioglu, P. Robert Duimering, Mehrdad Pirnia
- Abstract summary: Deep neural networks (DNNs) have been used to model complex optimization problems in many applications. They have difficulty guaranteeing solution optimality and feasibility, despite training on large datasets. This paper proposes a NN modeling approach and learning algorithm that discovers the exact closed-form solution to a quadratic program with linear constraints.
- Score: 2.064612766965483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have been used to model complex optimization problems in many applications, yet have difficulty guaranteeing solution optimality and feasibility, despite training on large datasets. Training a NN as a surrogate optimization solver amounts to estimating a global solution function that maps varying problem input parameters to the corresponding optimal solutions. Work in multiparametric programming (mp) has shown that solutions to quadratic programs (QPs) are piecewise linear functions of the parameters, and researchers have suggested leveraging this property to model mp-QPs using NNs with ReLU activation functions, which also exhibit piecewise linear behaviour. This paper proposes a NN modeling approach and learning algorithm that discovers the exact closed-form solution to a QP with linear constraints, by analytically deriving the NN model parameters directly from the problem coefficients, without training. Whereas a generic DNN cannot guarantee accuracy outside the training distribution, the closed-form NN model produces exact solutions for every discovered critical region of the solution function. To evaluate the closed-form NN model, it was applied to DC optimal power flow problems in electricity management. In terms of Karush-Kuhn-Tucker (KKT) optimality and feasibility of solutions, it outperformed a classically trained DNN and was competitive with, or outperformed, a commercial analytic solver (Gurobi) at far lower computational cost. For a long-range energy planning problem, it produced optimal and feasible solutions for millions of input parameters within seconds.
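To make the piecewise-linear property concrete, here is a minimal NumPy sketch of the mechanism the abstract describes: within one critical region (a fixed set of active constraints), the QP's KKT conditions reduce to a linear system, so the optimal solution is an affine function of the parameters and can be written down analytically, with no training. The toy problem data, the assumption that parameters enter only the constraint right-hand side, and all names are illustrative, not taken from the paper.

```python
# Hedged sketch: closed-form mp-QP solution for one critical region.
# Assumes min 0.5 x'Qx + c'x  s.t.  A x <= b0 + E @ theta, with Q positive
# definite; all problem data below is a made-up illustration.
import numpy as np

def critical_region_solution(Q, c, A, b0, E, active):
    """Affine map theta -> (x*, lambda*) for a fixed active set.

    With the active constraints holding at equality, the KKT system
        [Q    A_a'] [x     ]   [-c              ]
        [A_a  0   ] [lambda] = [b0_a + E_a theta]
    is linear, so its solution is affine in theta.
    """
    Aa, b0a, Ea = A[active], b0[active], E[active]
    n, m = Q.shape[0], Aa.shape[0]
    K = np.block([[Q, Aa.T], [Aa, np.zeros((m, m))]])
    sol0 = np.linalg.solve(K, np.concatenate([-c, b0a]))                    # intercept
    solE = np.linalg.solve(K, np.vstack([np.zeros((n, Ea.shape[1])), Ea]))  # slope
    return solE[:n], sol0[:n], solE[n:], sol0[n:]  # Fx, gx, Fl, gl

# Toy instance: min x1^2 + x2^2 - 2*x1 - 5*x2  s.t.  x1 + x2 <= 1 + theta, x >= 0.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b0 = np.array([1.0, 0.0, 0.0])
E = np.array([[1.0], [0.0], [0.0]])  # theta shifts only constraint 0

Fx, gx, Fl, gl = critical_region_solution(Q, c, A, b0, E, active=[0])
theta = np.array([0.5])
x_star, lam = Fx @ theta + gx, Fl @ theta + gl  # x* = (0, 1.5), lambda* = 2
# KKT checks: stationarity, the active row tight, dual feasibility.
assert np.allclose(Q @ x_star + c + A[[0]].T @ lam, 0.0)
assert np.isclose(A[0] @ x_star, b0[0] + E[0] @ theta)
assert (lam >= 0).all()
print(x_star, lam)
```

Enumerating active sets this way yields one affine piece per critical region; the paper's approach, per the abstract, is to encode those pieces as analytically derived weights of a ReLU network rather than a region lookup table.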
Related papers
- Partially-Supervised Neural Network Model For Quadratic Multiparametric Programming [2.765106384328772]
This study proposes a partially-supervised NN architecture that directly represents the mathematical structure of the global solution function. In contrast to generic NN training approaches, the proposed PSNN method derives a large proportion of model weights directly from the mathematical properties of the optimization problem.
arXiv Detail & Related papers (2025-06-05T20:26:18Z)
- Towards graph neural networks for provably solving convex optimization problems [5.966097889241178]
We propose an iterative MPNN framework to solve convex optimization problems with provable feasibility guarantees. Experimental results show that our approach outperforms existing neural baselines in solution quality and feasibility.
arXiv Detail & Related papers (2025-02-04T16:11:41Z)
- A Guaranteed-Stable Neural Network Approach for Optimal Control of Nonlinear Systems [3.5000297213981653]
A promising approach to optimal control of nonlinear systems involves iteratively linearizing the system and solving an optimization problem at each time instant to determine the optimal control input. Since this approach relies on online optimization, it can be computationally expensive, and thus unrealistic for systems with limited computing resources. One potential solution to this issue is to incorporate a Neural Network (NN) into the control loop.
arXiv Detail & Related papers (2025-01-28T22:55:47Z)
- ILP-based Resource Optimization Realized by Quantum Annealing for Optical Wide-area Communication Networks -- A Framework for Solving Combinatorial Problems of a Real-world Application by Quantum Annealing [5.924780594614675]
In recent works we demonstrated how such a problem could be cast as a quadratic unconstrained binary optimization (QUBO) problem that can be embedded onto the D-Wave Advantage quantum annealer system (a toy ILP-to-QUBO reduction is sketched after this list).
Here we report on our investigations for optimizing system parameters, and how we incorporate machine learning (ML) techniques to further improve on the quality of solutions.
We successfully apply this NN to a simple integer linear programming (ILP) example, demonstrating how the NN can fully map out the solution space that was not captured by D-Wave.
arXiv Detail & Related papers (2024-01-01T17:52:58Z)
- Improved Training of Physics-Informed Neural Networks with Model Ensembles [81.38804205212425]
We propose to expand the solution interval gradually to make the PINN converge to the correct solution.
All ensemble members converge to the same solution in the vicinity of observed data.
We show experimentally that the proposed method can improve the accuracy of the found solution.
arXiv Detail & Related papers (2022-04-11T14:05:34Z)
- Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural Networks [52.32646357164739]
We propose a deep neural network (DNN) to solve the AC optimal power flow (AC-OPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated into other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual Neural Networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested on four types of problems, including compliance minimization, fluid-structure optimization, heat transfer enhancement, and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
- Optimizing Wireless Systems Using Unsupervised and Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
arXiv Detail & Related papers (2020-01-03T11:01:52Z)
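As a toy illustration of the ILP-to-QUBO casting mentioned in the quantum annealing entry above, the following hedged sketch folds a three-variable binary program with one packing constraint into a single QUBO matrix via a penalty term and solves it by brute force. The instance, penalty weight, and variable names are made up for illustration, not the authors' network-provisioning model.

```python
# Hedged toy ILP -> QUBO reduction (illustrative instance only).
# ILP: maximize 3*x0 + 2*x1 + 2*x2  s.t.  x0 + x1 <= 1,  x in {0,1}^3.
import itertools
import numpy as np

P = 10.0  # penalty weight; must exceed any objective gain from violating the constraint

# Minimize x' Q x over binary x: negate the objective (max -> min) and add
# P*x0*x1, which is nonzero exactly when the constraint x0 + x1 <= 1 is violated.
Q = np.array([
    [-3.0,  P,   0.0],
    [ 0.0, -2.0, 0.0],
    [ 0.0,  0.0, -2.0],
])

best = min(itertools.product([0, 1], repeat=3),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best)  # (1, 0, 1): takes x0 over x1 and keeps x2, so the constraint holds
```

General inequality constraints need binary slack variables before the penalty step; a QUBO of this form is what gets minor-embedded onto the annealer's hardware graph.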