Deep Distributed Optimization for Large-Scale Quadratic Programming
- URL: http://arxiv.org/abs/2412.12156v2
- Date: Sat, 25 Jan 2025 20:47:23 GMT
- Title: Deep Distributed Optimization for Large-Scale Quadratic Programming
- Authors: Augustinos D. Saravanos, Hunter Kuperman, Alex Oshin, Arshiya Taj Abdul, Vincent Pacelli, Evangelos A. Theodorou
- Abstract summary: This paper introduces a novel deep learning-aided distributed optimization architecture designed for tackling large-scale quadratic programming (QP) problems.
DeepDistributedQP demonstrates strong generalization by training on small problems and scaling to solve much larger ones (up to 50K variables and 150K constraints) using the same policy.
- Score: 15.773581194857085
- Abstract: Quadratic programming (QP) forms a crucial foundation in optimization, encompassing a broad spectrum of domains and serving as the basis for more advanced algorithms. Consequently, as the scale and complexity of modern applications continue to grow, the development of efficient and reliable QP algorithms is becoming increasingly vital. In this context, this paper introduces a novel deep learning-aided distributed optimization architecture designed for tackling large-scale QP problems. First, we combine the state-of-the-art Operator Splitting QP (OSQP) method with a consensus approach to derive DistributedQP, a new method tailored for network-structured problems, with guaranteed convergence to optimality. Subsequently, we unfold this optimizer into a deep learning framework, leading to DeepDistributedQP, which leverages learned policies to accelerate reaching the desired accuracy within a restricted number of iterations. Our approach is also theoretically grounded through Probably Approximately Correct (PAC)-Bayes theory, providing generalization bounds on the expected optimality gap for unseen problems. The proposed framework and its centralized version DeepQP significantly outperform their standard optimization counterparts on a variety of tasks such as randomly generated problems, optimal control, linear regression, and transportation networks. Notably, DeepDistributedQP demonstrates strong generalization by training on small problems and scaling to solve much larger ones (up to 50K variables and 150K constraints) using the same policy. Moreover, it achieves orders-of-magnitude improvements in wall-clock time compared to OSQP. The certifiable performance guarantees of our approach are also demonstrated, ensuring higher-quality solutions over traditional optimizers.
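To make the unfolded optimizer concrete, here is a minimal sketch of the kind of ADMM iteration that OSQP-style methods apply to a QP of the form min 0.5 x'Px + q'x s.t. l <= Ax <= u. The per-iteration penalty schedule `rhos` is exactly the kind of parameter a learned policy would supply; all names are illustrative and this is not the authors' implementation.

```python
import numpy as np

def admm_qp(P, q, A, l, u, rhos, sigma=1e-6):
    """ADMM for min 0.5 x'Px + q'x  s.t.  l <= Ax <= u.
    `rhos` is a per-iteration penalty schedule; in a learned
    optimizer these values would come from a trained policy."""
    n, m = P.shape[0], A.shape[0]
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    for rho in rhos:
        # x-update: solve the regularized normal equations
        H = P + sigma * np.eye(n) + rho * A.T @ A
        x = np.linalg.solve(H, sigma * x - q + A.T @ (rho * z - y))
        # z-update: project onto the box [l, u]
        z = np.clip(A @ x + y / rho, l, u)
        # dual update on the constraint residual
        y = y + rho * (A @ x - z)
    return x

# Toy usage: minimize ||x||^2 + q'x subject to 0 <= x <= 1
P = 2.0 * np.eye(3)
q = np.array([-1.0, -2.0, 3.0])
A = np.eye(3)
print(np.round(admm_qp(P, q, A, np.zeros(3), np.ones(3), rhos=[1.0] * 200), 2))
# approx [0.5, 1.0, 0.0]: here the box simply clips the unconstrained optimum -q/2
```

DeepDistributedQP additionally splits the problem across a network and runs such updates locally with consensus rounds; the sketch shows only the centralized skeleton that DeepQP unrolls.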
Related papers
- A Quantum Genetic Algorithm Framework for the MaxCut Problem [49.59986385400411]
The proposed method introduces a Quantum Genetic Algorithm (QGA) using a Grover-based evolutionary framework and divide-and-conquer principles.
On complete graphs, the proposed method consistently achieves the true optimal MaxCut values, outperforming the Semidefinite Programming (SDP) approach.
On Erdős-Rényi random graphs, the QGA demonstrates competitive performance, achieving median solutions within 92-96% of the SDP results.
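For intuition about the evolutionary loop being quantum-accelerated, below is a purely classical genetic-algorithm baseline for MaxCut; it is an assumption-laden stand-in, since the paper replaces this kind of selection loop with Grover-based quantum search.

```python
import numpy as np

rng = np.random.default_rng(0)

def cut_value(W, s):
    """MaxCut objective: total weight of edges crossing partition s in {0,1}^n."""
    return 0.5 * np.sum(W * (s[:, None] != s[None, :]))

def genetic_maxcut(W, pop=40, gens=100, mut=0.05):
    n = W.shape[0]
    population = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        fit = np.array([cut_value(W, s) for s in population])
        # tournament selection: each child gets the fitter of two random parents
        i, j = rng.integers(0, pop, (2, pop))
        parents = population[np.where(fit[i] >= fit[j], i, j)]
        # one-point crossover followed by bit-flip mutation
        point = rng.integers(1, n, pop)
        mask = np.arange(n)[None, :] < point[:, None]
        children = np.where(mask, parents, parents[rng.permutation(pop)])
        children ^= (rng.random((pop, n)) < mut).astype(children.dtype)
        population = children
    fit = np.array([cut_value(W, s) for s in population])
    return population[fit.argmax()], fit.max()

W = rng.random((12, 12)); W = np.triu(W, 1); W = W + W.T  # random weighted graph
s, val = genetic_maxcut(W)
print(val)
```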
arXiv Detail & Related papers (2025-01-02T05:06:16Z)
- Efficient and Scalable Deep Reinforcement Learning for Mean Field Control Games [16.62770187749295]
Mean Field Control Games (MFCGs) provide a powerful theoretical framework for analyzing systems of infinitely many interacting agents.
This paper presents a scalable deep Reinforcement Learning (RL) approach to approximate equilibrium solutions of MFCGs.
arXiv Detail & Related papers (2024-12-28T02:04:53Z)
- Integrating Optimization Theory with Deep Learning for Wireless Network Design [38.257335693563554]
Traditional wireless network design relies on optimization algorithms derived from domain-specific mathematical models.
Deep learning has emerged as a promising alternative to overcome complexity and adaptability concerns.
This paper introduces a novel approach that integrates optimization theory with deep learning methodologies to address these issues.
arXiv Detail & Related papers (2024-12-11T20:27:48Z)
- Differentiation Through Black-Box Quadratic Programming Solvers [16.543673072027136]
We introduce dQP, a modular framework that enables plug-and-play differentiation for any quadratic programming (QP) solver.
Our solution is based on the core theoretical insight that knowledge of the active constraint set at the QP optimum allows for explicit differentiation.
Our implementation, which will be made publicly available, interfaces with an existing framework that supports over 15 state-of-the-art QP solvers.
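The active-set insight lends itself to a compact illustration: once the active constraints at the optimum are known, the QP is locally an equality-constrained problem, and differentiating its KKT system gives the solution Jacobian in closed form. A hypothetical numpy sketch, not the dQP interface:

```python
import numpy as np

def dx_dq(P, G, lam_star, tol=1e-7):
    """Jacobian dx*/dq of  min 0.5 x'Px + q'x  s.t.  Gx <= h,
    from the optimal duals lam* (strict complementarity assumed)."""
    Ga = G[lam_star > tol]               # rows of the active constraints
    n, k = P.shape[0], Ga.shape[0]
    # Differentiate the active-set KKT system:
    #   [P   Ga'] [dx ]   [-dq]
    #   [Ga  0  ] [dnu] = [ 0 ]
    K = np.block([[P, Ga.T], [Ga, np.zeros((k, k))]])
    rhs = np.vstack([-np.eye(n), np.zeros((k, n))])
    return np.linalg.solve(K, rhs)[:n]

# Toy check: min 0.5||x||^2 + q'x s.t. x <= 0 with q = [1, -1].
# The optimum is x* = [-1, 0] with duals lam* = [0, 1] (second constraint active).
P, G = np.eye(2), np.eye(2)
print(dx_dq(P, G, np.array([0.0, 1.0])))
# [[-1. 0.] [0. 0.]]: the free coordinate tracks q, the clamped one is insensitive
```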
arXiv Detail & Related papers (2024-10-08T20:01:39Z)
- NeuralQP: A General Hypergraph-based Optimization Framework for Large-scale QCQPs [8.503330120957052]
This paper introduces NeuralQP, a general hypergraph-based framework for large-scale Quadratically Constrained Quadratic Programs (QCQPs).
We prove that our framework UniEGNN with our hypergraph representation is equivalent to the Interior-Point Method (IPM) for quadratic programming.
Experiments on two benchmark problems and large-scale real-world instances from QPLIB demonstrate that NeuralQP outperforms state-of-the-art solvers.
arXiv Detail & Related papers (2024-09-28T10:42:47Z)
- Distributed Quantum Approximate Optimization Algorithm on Integrated High-Performance Computing and Quantum Computing Systems for Large-Scale Optimization [1.7099366779394252]
The quantum approximate optimization algorithm (QAOA) has shown promise for solving optimization problems by providing quantum speedups on gate-based quantum computing systems.
We propose a distributed QAOA (DQAOA) that leverages an integrated high-performance computing and quantum computing system.
We successfully optimize photonic structures using an active-learning-assisted variant (AL-DQAOA), indicating that solving real-world optimization problems with gate-based quantum computing is feasible with our strategies.
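As a rough classical skeleton of the decompose-solve-aggregate workflow, the sketch below sweeps block-coordinate sub-QUBOs to a local optimum; the brute-force block solver marks the slot that DQAOA would fill with QAOA runs dispatched to quantum hardware. All names are illustrative.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

def qubo_energy(Q, x):
    return x @ Q @ x

def solve_block(Q, x, block):
    """Exhaustively solve one sub-QUBO with the rest of x held fixed.
    This brute-force call stands in for a QAOA run on a QPU."""
    best, best_e = x[block].copy(), qubo_energy(Q, x)
    for bits in product([0, 1], repeat=len(block)):
        x[block] = bits
        e = qubo_energy(Q, x)
        if e < best_e:
            best, best_e = np.array(bits), e
    x[block] = best
    return x

# Divide-and-aggregate loop: sweep over variable blocks a few times
n, bs = 24, 6
Q = rng.normal(size=(n, n)); Q = (Q + Q.T) / 2
x = rng.integers(0, 2, n)
for _ in range(5):
    for start in range(0, n, bs):
        x = solve_block(Q, x, list(range(start, start + bs)))
print(qubo_energy(Q, x))  # monotonically non-increasing across sweeps
```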
arXiv Detail & Related papers (2024-07-29T17:42:25Z)
- Lower Bounds and Optimal Algorithms for Non-Smooth Convex Decentralized Optimization over Time-Varying Networks [57.24087627267086]
We consider the task of minimizing the sum of convex functions stored in a decentralized manner across the nodes of a communication network.
Lower bounds on the number of decentralized communications and (sub)gradient computations required to solve the problem have been established.
We develop the first optimal algorithm that matches these lower bounds and offers substantially improved theoretical performance compared to the existing state of the art.
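For context, the baseline these lower bounds are measured against is the decentralized subgradient method; a minimal sketch over a randomly varying gossip network follows (a textbook baseline, not the paper's optimal algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, dim, T = 8, 5, 400

# Node i privately holds the non-smooth convex cost f_i(x) = |a_i'x - b_i|
A = rng.normal(size=(n_nodes, dim))
b = rng.normal(size=n_nodes)

def subgrad(i, x):
    return np.sign(A[i] @ x - b[i]) * A[i]

X = np.zeros((n_nodes, dim))                  # one local iterate per node
for t in range(1, T + 1):
    # time-varying communication: this round only nodes p and q can talk
    p, q = rng.choice(n_nodes, 2, replace=False)
    X[p] = X[q] = 0.5 * (X[p] + X[q])         # doubly stochastic gossip step
    step = 1.0 / np.sqrt(t)                   # classic diminishing step size
    X = X - step * np.array([subgrad(i, X[i]) for i in range(n_nodes)])

# consensus residual and average objective after T rounds
print(X.std(axis=0).max(), np.abs(A @ X.mean(axis=0) - b).mean())
```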
arXiv Detail & Related papers (2024-05-28T10:28:45Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
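That claim is easy to verify on a quadratic. Unrolling T gradient-descent steps on f(x) = 0.5 x'Qx - b'x from x_0 = 0 gives a Jacobian dx_T/db equal to a truncated Neumann series for Q^{-1}; in other words, the backward pass is an iterative solve of the linear system Q J = I. A self-contained check (illustrative, not the Folded Optimization code):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
Q = rng.normal(size=(n, n)); Q = Q @ Q.T + n * np.eye(n)  # well-conditioned SPD
b = rng.normal(size=n)
alpha = 1.0 / np.linalg.norm(Q, 2)                        # safe step size

# Forward pass: x_{k+1} = x_k - alpha * (Q x_k - b) = M x_k + alpha * b
# Backward pass by hand: J_{k+1} = M J_k + alpha * I, so
# J_T = alpha * sum_{i<T} M^i -> Q^{-1}, a Neumann-series linear solve.
T = 500
M = np.eye(n) - alpha * Q
x, J = np.zeros(n), np.zeros((n, n))
for _ in range(T):
    x = M @ x + alpha * b
    J = M @ J + alpha * np.eye(n)

print(np.allclose(J, np.linalg.inv(Q), atol=1e-6))        # True
print(np.allclose(x, np.linalg.solve(Q, b), atol=1e-6))   # True
```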
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Federated Multi-Level Optimization over Decentralized Networks [55.776919718214224]
We study the problem of distributed multi-level optimization over a network, where agents can only communicate with their immediate neighbors.
We propose a novel gossip-based distributed multi-level optimization algorithm that enables networked agents to solve optimization problems at different levels in a single timescale.
Our algorithm achieves optimal sample complexity, scaling linearly with the network size, and demonstrates state-of-the-art performance on various applications.
arXiv Detail & Related papers (2023-10-10T00:21:10Z)
- Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are considered among the most promising approaches for exploiting near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study, we apply a Cross-Entropy method to shape the optimization landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
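The Cross-Entropy method itself fits in a few lines: sample candidate parameters from a Gaussian, keep the elite fraction, and refit the distribution to them. Below is a sketch with a toy stand-in for the QAOA parameter landscape; a real run would estimate the objective on a quantum device, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def landscape(theta):
    """Toy stand-in for the QAOA expectation value over (gamma, beta)."""
    return np.sin(3 * theta[..., 0]) * np.cos(2 * theta[..., 1]) + 0.1 * (theta ** 2).sum(-1)

mu, sigma = np.zeros(2), np.ones(2)
for _ in range(30):
    thetas = mu + sigma * rng.normal(size=(100, 2))      # sample candidates
    elite = thetas[np.argsort(landscape(thetas))[:10]]   # keep the 10 best (minimizing)
    mu, sigma = elite.mean(0), elite.std(0) + 1e-3       # refit the sampling distribution
print(mu, landscape(mu))
```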
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm provably achieves the best-available convergence guarantees for objectives without the Polyak-Łojasiewicz (PL) condition while simultaneously outperforming existing algorithms for PL objectives.
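As a reference point for what adaptivity buys, an AdaGrad-norm step sizes itself from accumulated gradient norms with no problem-specific tuning. A minimal sketch on a toy nonconvex objective (illustrative, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(5)

def f_and_grad(x):
    """Nonconvex test function 0.25*(||x||^2 - 1)^2 and its gradient."""
    r = x @ x - 1.0
    return 0.25 * r ** 2, r * x

x, eta, acc = rng.normal(size=10), 0.5, 1e-8
for _ in range(2000):
    _, g = f_and_grad(x)
    acc += g @ g                     # accumulate squared gradient norms
    x -= eta / np.sqrt(acc) * g      # step size adapts without knowing smoothness
print(f_and_grad(x)[0])              # approaches 0
```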
arXiv Detail & Related papers (2020-02-13T05:42:27Z)