Unrolled Neural Networks for Constrained Optimization
- URL: http://arxiv.org/abs/2601.17274v1
- Date: Sat, 24 Jan 2026 03:12:41 GMT
- Title: Unrolled Neural Networks for Constrained Optimization
- Authors: Samar Hadou, Alejandro Ribeiro
- Abstract summary: Our framework comprises two coupled neural networks that jointly approximate the saddle point of the Lagrangian. We numerically evaluate the framework on mixed-integer quadratic programs and power allocation in wireless networks.
- Score: 83.29547301151177
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we develop unrolled neural networks to solve constrained optimization problems, offering accelerated, learnable counterparts to dual ascent (DA) algorithms. Our framework, termed constrained dual unrolling (CDU), comprises two coupled neural networks that jointly approximate the saddle point of the Lagrangian. The primal network emulates an iterative optimizer that finds a stationary point of the Lagrangian for a given dual multiplier, sampled from an unknown distribution. The dual network generates trajectories towards the optimal multipliers across its layers while querying the primal network at each layer. Departing from standard unrolling, we induce DA dynamics by imposing primal-descent and dual-ascent constraints through constrained learning. We formulate training the two networks as a nested optimization problem and propose an alternating procedure that updates the primal and dual networks in turn, mitigating uncertainty in the multiplier distribution required for primal network training. We numerically evaluate the framework on mixed-integer quadratic programs (MIQPs) and power allocation in wireless networks. In both cases, our approach yields near-optimal near-feasible solutions and exhibits strong out-of-distribution (OOD) generalization.
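The paper's implementation is not reproduced here; the following is a minimal PyTorch sketch of the two-network structure the abstract describes, for a generic problem min_x f(x) s.t. g(x) <= 0. All names (PrimalNet, DualUnrolling, the learnable step sizes) are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of constrained dual unrolling (CDU), assuming a problem
#   min_x f(x)  s.t.  g(x) <= 0,
# with dual ascent dynamics  x_k = argmin_x L(x, lam_k),
#                            lam_{k+1} = [lam_k + eta * g(x_k)]_+ .
import torch
import torch.nn as nn

def lagrangian(f, g, x, lam):
    """L(x, lam) = f(x) + <lam, g(x)>; the training objective for the primal net."""
    return f(x) + (lam * g(x)).sum(-1)

class PrimalNet(nn.Module):
    """Maps a dual multiplier to an (approximate) Lagrangian minimizer x*(lam)."""
    def __init__(self, dual_dim, x_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dual_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, x_dim),
        )

    def forward(self, lam):
        return self.net(lam)

class DualUnrolling(nn.Module):
    """Each layer emulates one dual-ascent step, querying the primal network."""
    def __init__(self, primal, g, n_layers=10):
        super().__init__()
        self.primal, self.g = primal, g
        self.steps = nn.Parameter(0.1 * torch.ones(n_layers))  # learnable step sizes

    def forward(self, lam0):
        lam, traj = lam0, [lam0]
        for eta in self.steps:
            x = self.primal(lam)                     # query primal at each layer
            lam = torch.relu(lam + eta * self.g(x))  # projected dual-ascent step
            traj.append(lam)
        return lam, traj                             # trajectory toward lam*
```

Training would alternate between fitting the primal network to Lagrangian minimizers for sampled multipliers and updating the dual network under primal-descent and dual-ascent constraints, as the abstract describes; that constrained-learning loop is omitted from the sketch.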
Related papers
- Nonlinear Optimization with GPU-Accelerated Neural Network Constraints [0.0]
We treat the neural network as a "gray box" whose intermediate variables and constraints are not exposed to the optimization solver. Compared to the full-space formulation, the reduced-space formulation leads to faster solves and fewer iterations in an interior point method. (A minimal gray-box sketch appears after this list.)
arXiv Detail & Related papers (2025-09-26T15:13:46Z)
- Unrolled Graph Neural Networks for Constrained Optimization [83.29547301151177]
We study the dynamics of the dual ascent algorithm in two coupled graph neural networks (GNNs). We propose a joint training scheme that alternates between updating the primal and dual networks. Our numerical experiments demonstrate that our approach yields near-optimal near-feasible solutions.
arXiv Detail & Related papers (2025-09-21T16:55:41Z)
- Fast State-Augmented Learning for Wireless Resource Allocation with Dual Variable Regression [83.27791109672927]
We show how a state-augmented graph neural network (GNN) parametrization for the resource allocation policy circumvents the drawbacks of the ubiquitous dual subgradient methods. Lagrangian maximizing state-augmented policies are learned during the offline training phase. We prove a convergence result and an exponential probability bound on the excursions of the dual function (iterate) optimality gaps.
arXiv Detail & Related papers (2025-06-23T15:20:58Z)
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of learning epochs increases.
We show that the threshold on the number of training samples grows with the network width. (A minimal unfolded-ISTA sketch appears after this list.)
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled architectures and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Universal Neural Optimal Transport [0.0]
UNOT (Universal Neural Optimal Transport) is a novel framework capable of accurately predicting (entropic) OT distances and plans between discrete measures for a given cost function. We show that our network can be used as a state-of-the-art initialization for the Sinkhorn algorithm, with speedups of up to $7.4\times$. (A minimal Sinkhorn warm-start sketch appears after this list.)
arXiv Detail & Related papers (2022-11-30T21:56:09Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study large-scale distributed stochastic AUC maximization with deep neural networks.
Our method requires far fewer communication rounds while retaining its theoretical convergence guarantees.
Our experiments on several datasets confirm the effectiveness of the method and corroborate our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
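For the gray-box entry above, a minimal sketch assuming a SciPy solver and a small PyTorch network: the optimizer sees only the network's input-output map and its Jacobian, never the per-layer variables. The network, objective, and bounds are illustrative, not from that paper.

```python
# Reduced-space ("gray box") neural-network constraint: the solver queries only
# y = phi(x) and its Jacobian; intermediate layer variables stay hidden.
import numpy as np
import torch
from scipy.optimize import minimize, NonlinearConstraint

phi = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1))  # stand-in trained network

def g(x):
    # constraint value phi(x), exposed as a plain function of the decision vector
    return phi(torch.tensor(x, dtype=torch.float32)).detach().numpy()

def g_jac(x):
    # Jacobian via autodiff; the solver never sees per-layer constraints
    xt = torch.tensor(x, dtype=torch.float32)
    return torch.autograd.functional.jacobian(phi, xt).detach().numpy().reshape(1, -1)

res = minimize(lambda x: np.sum(x**2),              # illustrative objective ||x||^2
               x0=np.array([1.0, 1.0]),
               constraints=NonlinearConstraint(g, -np.inf, 0.0, jac=g_jac),
               method="trust-constr")               # interior-point-style solver
```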
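For the unfolded ISTA entry above, a minimal sketch of an unrolled ISTA network with a smooth soft-thresholding activation (a softplus-based surrogate here; that paper's exact smoothing may differ). All dimensions and initializations are illustrative.

```python
# Unfolded ISTA: each layer computes x <- s_theta(W1 y + W2 x) with learnable
# weights and a smooth surrogate of the soft-thresholding operator.
import torch
import torch.nn as nn
import torch.nn.functional as F

def smooth_soft_threshold(x, theta):
    # smooth version of sign(x) * max(|x| - theta, 0): relu replaced by softplus
    return F.softplus(x - theta) - F.softplus(-x - theta)

class UnfoldedISTA(nn.Module):
    def __init__(self, m, n, n_layers=16, step=0.1):
        super().__init__()
        A = torch.randn(m, n) / m**0.5                            # placeholder sensing matrix
        self.W1 = nn.Parameter(step * A.t())                      # measurement-to-signal path
        self.W2 = nn.Parameter(torch.eye(n) - step * A.t() @ A)   # signal-to-signal path
        self.theta = nn.Parameter(0.1 * torch.ones(n_layers))     # per-layer thresholds
        self.n_layers = n_layers

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.W1.shape[0], device=y.device)
        for k in range(self.n_layers):
            x = smooth_soft_threshold(y @ self.W1.t() + x @ self.W2.t(),
                                      self.theta[k])
        return x
```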
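For the UNOT entry above, a minimal sketch of the Sinkhorn iteration being warm-started: a learned initializer would supply `v0` in place of the uniform default, which is where the reported speedups come from. `predict_init` in the usage comment is a hypothetical stand-in for such a network.

```python
# Sinkhorn iteration for entropic OT between histograms a, b with cost matrix C.
import torch

def sinkhorn(a, b, C, eps=0.1, n_iters=100, v0=None):
    K = torch.exp(-C / eps)                       # Gibbs kernel
    v = torch.ones_like(b) if v0 is None else v0  # learned warm start goes here
    for _ in range(n_iters):
        u = a / (K @ v)                           # alternating scaling updates
        v = b / (K.t() @ u)
    P = u[:, None] * K * v[None, :]               # transport plan
    return (P * C).sum(), P                       # OT cost and plan

# usage with a hypothetical learned initializer:
#   cost, plan = sinkhorn(a, b, C, v0=predict_init(a, b, C))
```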
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.