STRIDE along Spectrahedral Vertices for Solving Large-Scale Rank-One
Semidefinite Relaxations
- URL: http://arxiv.org/abs/2105.14033v1
- Date: Fri, 28 May 2021 18:07:16 GMT
- Title: STRIDE along Spectrahedral Vertices for Solving Large-Scale Rank-One
Semidefinite Relaxations
- Authors: Heng Yang, Ling Liang, Kim-Chuan Toh, Luca Carlone
- Abstract summary: We consider solving high-order semidefinite programming (SDP) relaxations of nonconvex polynomial optimization problems (POPs).
Existing approaches, which solve the SDP independently from the POP, either cannot scale to large problems or suffer from slow convergence due to the typical degeneracy of such SDPs.
We propose a new algorithmic framework called SpecTrahedral pRoximal gradIent Descent along vErtices (STRIDE).
- Score: 27.353023427198806
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider solving high-order semidefinite programming (SDP) relaxations of
nonconvex polynomial optimization problems (POPs) that admit rank-one optimal
solutions. Existing approaches, which solve the SDP independently from the POP,
either cannot scale to large problems or suffer from slow convergence due to
the typical degeneracy of such SDPs. We propose a new algorithmic framework,
called SpecTrahedral pRoximal gradIent Descent along vErtices (STRIDE), that
blends fast local search on the nonconvex POP with global descent on the convex
SDP. Specifically, STRIDE follows a globally convergent trajectory driven by a
proximal gradient method (PGM) for solving the SDP, while simultaneously
probing long, but safeguarded, rank-one "strides", generated by fast nonlinear
programming algorithms on the POP, to seek rapid descent. We prove STRIDE has
global convergence. To solve the subproblem of projecting a given point onto
the feasible set of the SDP, we reformulate the projection step as a
continuously differentiable unconstrained optimization and apply a
limited-memory BFGS method to achieve both scalability and accuracy. We conduct
numerical experiments on solving second-order SDP relaxations arising from two
important applications in machine learning and computer vision. STRIDE
dominates a diverse set of five existing SDP solvers and is the only solver
that can solve degenerate rank-one SDPs to high accuracy (e.g., KKT residuals
below 1e-9), even in the presence of millions of equality constraints.
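The projection subproblem above has a compact concrete form. The sketch below (Python/NumPy; not the authors' implementation) illustrates the two ingredients: projecting a point onto the spectrahedron {X ⪰ 0 : A(X) = b} by running L-BFGS on the continuously differentiable dual of the projection problem, and the safeguarded acceptance test that takes a rank-one "stride" only when it achieves a lower SDP cost than the proximal gradient iterate. The helper names (`project_onto_spectrahedron`, `stride_step`, `local_search`) and the dense list-of-matrices representation of the constraint map are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def proj_psd(W):
    """Euclidean projection of a symmetric matrix onto the PSD cone."""
    lam, U = np.linalg.eigh(W)
    return (U * np.clip(lam, 0.0, None)) @ U.T

def project_onto_spectrahedron(Z, A_list, b):
    """Project Z onto {X >= 0 : <A_i, X> = b_i, i = 1..m}.

    The Lagrangian dual of the projection problem is the smooth
    unconstrained problem  min_y  0.5*||P_+(Z + A*y)||_F^2 - b^T y,
    whose gradient is A(P_+(Z + A*y)) - b; we minimize it with L-BFGS
    and recover the projection as X = P_+(Z + A*y)."""
    def Astar(y):  # adjoint map: A*y = sum_i y_i * A_i
        return sum(yi * Ai for yi, Ai in zip(y, A_list))

    def phi(y):
        Xplus = proj_psd(Z + Astar(y))
        val = 0.5 * np.sum(Xplus * Xplus) - b @ y
        grad = np.array([np.sum(Ai * Xplus) for Ai in A_list]) - b
        return val, grad

    res = minimize(phi, np.zeros(len(A_list)), jac=True, method="L-BFGS-B")
    return proj_psd(Z + Astar(res.x))

def stride_step(C, A_list, b, X, sigma, local_search):
    """One STRIDE-style iteration: a proximal gradient step on the SDP,
    safeguarded against a rank-one candidate from fast local search."""
    # Global descent: proximal gradient step for min <C, X> over the spectrahedron.
    X_pgm = project_onto_spectrahedron(X - sigma * C, A_list, b)
    # Long rank-one "stride": round X to a candidate POP solution v via
    # nonlinear programming (local_search is a user-supplied callback),
    # then lift it back to the rank-one matrix v v^T.  (For moment
    # relaxations the lifted point is feasible by construction; we
    # project here to stay safe in the generic case.)
    v = local_search(X)
    X_r1 = project_onto_spectrahedron(np.outer(v, v), A_list, b)
    # Safeguard: accept the stride only if it strictly improves the cost.
    return X_r1 if np.sum(C * X_r1) < np.sum(C * X_pgm) else X_pgm
```

At the scale targeted by the paper (millions of constraints), A and its adjoint would be applied matrix-free and the eigendecomposition would exploit the low rank of the iterates; the sketch favors clarity over scalability.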
Related papers
- Fast, Scalable, Warm-Start Semidefinite Programming with Spectral
Bundling and Sketching [53.91395791840179]
We present Unified Spectral Bundling with Sketching (USBS), a provably correct, fast and scalable algorithm for solving massive SDPs.
USBS provides a 500x speed-up over the state-of-the-art scalable SDP solver on an instance with over 2 billion decision variables.
arXiv Detail & Related papers (2023-12-19T02:27:22Z)
- Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piecewise-linear value function.
$\nu$-SDDP can significantly reduce problem-solving cost without sacrificing solution quality.
arXiv Detail & Related papers (2021-12-01T22:55:23Z)
- Efficient semidefinite bounds for multi-label discrete graphical models [6.226454551201676]
One of the main queries on such models is to identify the maximum a posteriori (MAP) assignment, i.e., to solve the associated Cost Function Network (WCSP).
We consider a traditional dualized constraint approach and a dedicated Burer-Monteiro-style method based on row-by-row updates (see the low-rank coordinate-descent sketch after this list).
arXiv Detail & Related papers (2021-11-24T13:38:34Z)
- Faster Algorithm and Sharper Analysis for Constrained Markov Decision Process [56.55075925645864]
The problem of constrained Markov decision processes (CMDPs) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new primal-dual convex approach is proposed with a novel integration of three ingredients: a regularized policy, a dual regularizer, and Nesterov's accelerated gradient descent on the dual.
This is the first demonstration that nonconcave CMDP problems can attain the $\mathcal{O}(1/\epsilon)$ complexity lower bound of convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z)
- Certifiable Outlier-Robust Geometric Perception: Exact Semidefinite Relaxations and Scalable Global Optimization [29.738513596063946]
We propose the first general framework to design certifiable algorithms for robust geometric perception in the presence of outliers.
Our experiments demonstrate that our SDP relaxation remains exact across applications even when a large fraction of the measurements are outliers.
arXiv Detail & Related papers (2021-09-07T21:42:16Z)
- Momentum-inspired Low-Rank Coordinate Descent for Diagonally Constrained SDPs [12.7944665592057]
We present a novel, practical, and provable approach for solving constrained semidefinite programming (SDP) problems at scale using accelerated nonconvex programming (a minimal sketch of the low-rank, momentum-accelerated idea appears after this list).
arXiv Detail & Related papers (2021-06-16T13:35:40Z)
- Two-Stage Stochastic Optimization via Primal-Dual Decomposition and Deep Unrolling [86.85697555068168]
Two-stage stochastic optimization plays a critical role in various engineering and scientific applications.
Efficient algorithms are still lacking, especially when the long-term and short-term variables are coupled in the constraints.
We show that PDD-SSCA can achieve superior performance over existing solutions.
arXiv Detail & Related papers (2021-05-05T03:36:00Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of the Automatic Primary Response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
- Stochastic gradient algorithms from ODE splitting perspective [0.0]
We present a different view on stochastic optimization, which goes back to splitting schemes for approximate solutions of ODEs.
In this work, we provide a connection between the stochastic gradient descent approach and the first-order splitting scheme for ODEs (see the splitting sketch after this list).
We consider a special case of splitting, which is inspired by machine learning applications, and derive a new upper bound on the global splitting error for it.
arXiv Detail & Related papers (2020-04-19T22:45:32Z)
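The last entry above admits a small, self-contained illustration. Under the splitting view, the gradient flow dx/dt = -∇f(x) with f = Σ_i f_i is approximated by integrating the sub-flows dx/dt = -∇f_i(x) one after another (Lie-Trotter splitting); a single explicit Euler step per sub-flow is exactly an incremental/stochastic gradient pass. A minimal sketch on least squares (problem data and step size are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
x_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
b = A @ x_true + 0.1 * rng.standard_normal(100)

# f(x) = sum_i f_i(x) with f_i(x) = 0.5 * (a_i^T x - b_i)^2
def grad_fi(x, i):
    return (A[i] @ x - b[i]) * A[i]

# Lie-Trotter splitting of the gradient flow dx/dt = -grad f(x):
# over each "epoch", integrate the sub-flows dx/dt = -grad f_i(x) in
# sequence; one explicit Euler step of size h per sub-flow is exactly an
# incremental-gradient (SGD-style) pass over the data.
x = np.zeros(5)
h = 0.01
for epoch in range(200):
    for i in range(len(b)):      # randomizing this order gives plain SGD
        x = x - h * grad_fi(x, i)

print("splitting/SGD estimate :", x)
print("least-squares solution :", np.linalg.lstsq(A, b, rcond=None)[0])
```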
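Two other entries above, the semidefinite bounds for graphical models (Burer-Monteiro-style row-by-row updates) and the momentum-inspired low-rank coordinate descent for diagonally constrained SDPs, share the same low-rank mechanism: factor X = V V^T and update V one row at a time. The sketch below shows that mechanism for min <C, X> s.t. diag(X) = 1, X ⪰ 0, where each row update has a closed form; it is a generic illustration (the heavy-ball momentum term and all names are assumptions of this sketch), not either paper's exact algorithm.

```python
import numpy as np

def lowrank_sdp_coordinate_descent(C, rank, iters=200, beta=0.3, seed=0):
    """Low-rank (Burer-Monteiro-style) coordinate descent for
        min <C, X>  s.t.  diag(X) = 1, X >= 0,
    via the factorization X = V V^T with unit-norm rows of V."""
    n = C.shape[0]
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((n, rank))
    V /= np.linalg.norm(V, axis=1, keepdims=True)   # feasible start
    V_prev = V.copy()
    for _ in range(iters):
        V_old = V.copy()
        for i in range(n):
            # Gradient of <C, VV^T> w.r.t. row i, excluding the constant
            # diagonal term: g = sum_{j != i} C_ij v_j.
            g = C[i] @ V - C[i, i] * V[i]
            w = -g + beta * (V[i] - V_prev[i])      # heavy-ball momentum
            nw = np.linalg.norm(w)
            if nw > 1e-12:
                V[i] = w / nw    # with beta = 0: exact row minimizer
        V_prev = V_old
    return V

# Usage: a random MaxCut-style instance; <C, VV^T> decreases over sweeps.
n = 50
rng = np.random.default_rng(1)
M = rng.standard_normal((n, n))
C = (M + M.T) / 2
V = lowrank_sdp_coordinate_descent(C, rank=8)
print("objective:", np.sum(C * (V @ V.T)))
```

With beta = 0 each row update is the exact coordinate minimizer given the other rows; the momentum term reuses the previous sweep's rows in the heavy-ball pattern.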