Fast, Scalable, Warm-Start Semidefinite Programming with Spectral
Bundling and Sketching
- URL: http://arxiv.org/abs/2312.11801v2
- Date: Fri, 9 Feb 2024 21:01:56 GMT
- Title: Fast, Scalable, Warm-Start Semidefinite Programming with Spectral
Bundling and Sketching
- Authors: Rico Angell and Andrew McCallum
- Abstract summary: We present Unified Spectral Bundling with Sketching (USBS), a provably correct, fast and scalable algorithm for solving massive SDPs.
USBS provides a 500x speed-up over the state-of-the-art scalable SDP solver on an instance with over 2 billion decision variables.
- Score: 53.91395791840179
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While semidefinite programming (SDP) has traditionally been limited to
moderate-sized problems, recent algorithms augmented with matrix sketching
techniques have enabled solving larger SDPs. However, these methods achieve
scalability at the cost of an increase in the number of necessary iterations,
resulting in slower convergence as the problem size grows. Furthermore, they
require iteration-dependent parameter schedules that prohibit effective
utilization of warm-start initializations important in practical applications
with incrementally-arriving data or mixed-integer programming. We present
Unified Spectral Bundling with Sketching (USBS), a provably correct, fast and
scalable algorithm for solving massive SDPs that can leverage a warm-start
initialization to further accelerate convergence. Our proposed algorithm is a
spectral bundle method for solving general SDPs containing both equality and
inequality constraints. Moveover, when augmented with an optional matrix
sketching technique, our algorithm achieves the dramatically improved
scalability of previous work while sustaining convergence speed. We empirically
demonstrate the effectiveness of our method across multiple applications, with
and without warm-starting. For example, USBS provides a 500x speed-up over the
state-of-the-art scalable SDP solver on an instance with over 2 billion
decision variables.
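The abstract attributes USBS's scalability to an optional matrix sketching technique. The snippet below is a minimal illustrative sketch, not the authors' implementation: it assumes a Nyström-style randomized sketch (a standard choice in prior scalable SDP work; the abstract itself only says "matrix sketching"), in which the solver stores only the product of the primal matrix with a random test matrix, folds in low-rank updates, and reconstructs a low-rank positive-semidefinite approximation on demand. All function names are hypothetical.

```python
import numpy as np

def init_sketch(n, r, seed=0):
    """Draw a random test matrix Omega (n x r) and an empty sketch S = X @ Omega."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((n, r))
    sketch = np.zeros((n, r))
    return omega, sketch

def rank_one_update(sketch, omega, v, weight=1.0):
    """Fold a rank-one update X <- X + weight * v v^T into the sketch
    without ever materializing the n x n matrix X."""
    return sketch + weight * np.outer(v, v @ omega)

def reconstruct(sketch, omega, eps=1e-8):
    """Recover a low-rank PSD approximation X_hat ~= S (Omega^T S)^+ S^T
    from the sketch (standard Nystrom reconstruction)."""
    shift = eps * np.linalg.norm(sketch)
    shifted = sketch + shift * omega                 # numerical stabilization
    core = omega.T @ shifted
    chol = np.linalg.cholesky((core + core.T) / 2)   # core is PSD after the shift
    factor = np.linalg.solve(chol, shifted.T).T      # X_hat = factor @ factor.T - shift*I
    u, s, _ = np.linalg.svd(factor, full_matrices=False)
    return u, np.maximum(s**2 - shift, 0.0)          # eigenvectors / eigenvalues of X_hat

# Toy usage: sketch a rank-3 PSD matrix of size 1000 with a rank-10 sketch.
n, r = 1000, 10
omega, sketch = init_sketch(n, r)
rng = np.random.default_rng(1)
for _ in range(3):
    v = rng.standard_normal(n)
    sketch = rank_one_update(sketch, omega, v)
u, eigvals = reconstruct(sketch, omega)
print("recovered eigenvalues:", np.round(eigvals[:5], 2))
```

The point of the sketch is that the n x n primal variable is never stored: memory scales with n * r rather than n^2, which is what makes instances with billions of decision variables tractable.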
Related papers
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit the connection between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions, which leads to sparse approximations of the feasible set.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite
Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - Accelerated nonlinear primal-dual hybrid gradient algorithms with
applications to machine learning [0.0]
The primal-dual hybrid gradient (PDHG) method is a first-order method that splits convex optimization problems with saddle-point structure into smaller subproblems.
PDHG requires step-size parameters fine-tuned for the problem at hand.
We introduce accelerated nonlinear variants of the PDHG algorithm that achieve faster convergence for a broad class of optimization problems relevant to machine learning (a minimal illustrative PDHG iteration is sketched after this list).
arXiv Detail & Related papers (2021-09-24T22:37:10Z) - Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale optimization problems is distributed asynchronous stochastic gradient descent (DASGD).
We show that DASGD converges to a global optimum under broad assumptions on the delays.
arXiv Detail & Related papers (2021-07-06T21:59:49Z) - Momentum-inspired Low-Rank Coordinate Descent for Diagonally Constrained
SDPs [12.7944665592057]
We present a novel, practical, and provable approach for solving diagonally constrained semidefinite programming (SDP) problems at scale using accelerated non-convex programming.
arXiv Detail & Related papers (2021-06-16T13:35:40Z) - STRIDE along Spectrahedral Vertices for Solving Large-Scale Rank-One
Semidefinite Relaxations [27.353023427198806]
We consider solving high-order semidefinite programming relaxations of polynomial optimization problems (POPs).
Existing approaches, which solve the SDP independently from the POP, either cannot scale to large problems or suffer from slow convergence due to the typical degeneracy of such SDPs.
We propose a new algorithmic framework called STRIDE that moves along spectrahedral vertices.
arXiv Detail & Related papers (2021-05-28T18:07:16Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point per step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
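As context for the PDHG entry above, the following is a minimal, self-contained illustration (not taken from any of the listed papers) of a basic PDHG / Chambolle-Pock iteration on a toy 1D total-variation denoising problem. It makes explicit the step-size coupling tau * sigma * ||D||^2 <= 1 that must be tuned to the problem operator.

```python
import numpy as np

def pdhg_tv_denoise(b, lam=1.0, iters=500):
    """Basic (unaccelerated) PDHG / Chambolle-Pock for
       min_x 0.5*||x - b||^2 + lam*||D x||_1,  D = 1D forward differences.
    Illustrative only; practical use needs problem-specific step sizes."""
    n = len(b)
    D = np.diff(np.eye(n), axis=0)     # (n-1) x n forward-difference operator
    L = np.linalg.norm(D, 2)           # operator norm ||D||
    tau = sigma = 0.99 / L             # satisfies tau * sigma * ||D||^2 <= 1
    x = x_bar = np.zeros(n)
    y = np.zeros(n - 1)
    for _ in range(iters):
        # Dual step + prox of f*(y) = indicator of {|y_i| <= lam}
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
        # Primal step + prox of g(x) = 0.5*||x - b||^2
        x_new = (x - tau * (D.T @ y) + tau * b) / (1.0 + tau)
        # Over-relaxation with theta = 1
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

# Toy usage: denoise a noisy piecewise-constant signal.
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(50), np.ones(50)])
noisy = signal + 0.2 * rng.standard_normal(100)
denoised = pdhg_tv_denoise(noisy, lam=0.5)
print("residual norm:", round(float(np.linalg.norm(denoised - signal)), 3))
```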