Zeroth and First Order Stochastic Frank-Wolfe Algorithms for Constrained
Optimization
- URL: http://arxiv.org/abs/2107.06534v1
- Date: Wed, 14 Jul 2021 08:01:30 GMT
- Title: Zeroth and First Order Stochastic Frank-Wolfe Algorithms for Constrained
Optimization
- Authors: Zeeshan Akhtar and Ketan Rajawat
- Abstract summary: Problems of convex optimization with two sets of constraints arise frequently in the context of semidefinite programming.
Since projection onto the first set of constraints is difficult, it becomes necessary to explore projection-free algorithms.
The efficacy of the proposed algorithms is tested on relevant applications of sparse matrix estimation, clustering via semidefinite relaxation, and the uniform sparsest cut problem.
- Score: 13.170519806372075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers stochastic convex optimization problems with two sets of
constraints: (a) deterministic constraints on the domain of the optimization
variable, which are difficult to project onto; and (b) deterministic or
stochastic constraints that admit efficient projection. Problems of this form
arise frequently in the context of semidefinite programming as well as when
various NP-hard problems are solved approximately via semidefinite relaxation.
Since projection onto the first set of constraints is difficult, it becomes
necessary to explore projection-free algorithms, such as the stochastic
Frank-Wolfe (FW) algorithm. On the other hand, the second set of constraints
cannot be handled in the same way, and must be incorporated as an indicator
function within the objective function, thereby complicating the application of
FW methods. Similar problems have been studied before, and solved using
first-order stochastic FW algorithms by applying homotopy and Nesterov's
smoothing techniques to the indicator function. This work improves upon these
existing results and puts forth momentum-based first-order methods that yield
improved convergence rates, at par with the best known rates for problems
without the second set of constraints. Zeroth-order variants of the proposed
algorithms are also developed and again improve upon the state-of-the-art rate
results. The efficacy of the proposed algorithms is tested on relevant
applications of sparse matrix estimation, clustering via semidefinite
relaxation, and the uniform sparsest cut problem.
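
To make the approach concrete, the following is a minimal sketch of a momentum-based stochastic FW loop for this two-constraint setting, assuming access to a stochastic gradient oracle, a linear minimization oracle (LMO) over the difficult set, and a projection onto the easy set. The oracle names (stoch_grad, lmo, proj_easy) and the rho/eta/beta schedules are illustrative assumptions, not the paper's exact algorithm or rate-achieving parameters; the easy set is handled through a smoothed indicator penalty in the homotopy style described above.

```python
import numpy as np

def momentum_stochastic_fw(stoch_grad, lmo, proj_easy, x0, T, beta0=1.0):
    """Sketch: momentum-based stochastic Frank-Wolfe with a smoothed
    indicator penalty for the easy-to-project constraint set D.

    stoch_grad(x): unbiased stochastic gradient of the objective at x.
    lmo(g):        minimizer of the linear function g . v over the hard
                   set C (no projection onto C is ever needed).
    proj_easy(x):  Euclidean projection onto D; the indicator of D is
                   smoothed as (beta/2) * dist(x, D)^2, whose gradient
                   is beta * (x - proj_easy(x)).
    """
    x = np.asarray(x0, dtype=float).copy()
    d = np.zeros_like(x)                              # momentum gradient estimate
    for t in range(1, T + 1):
        rho = min(1.0, 2.0 / (t + 1) ** (2.0 / 3.0))  # momentum weight (illustrative)
        eta = 2.0 / (t + 1)                           # FW step size (illustrative)
        beta = beta0 * np.sqrt(t)                     # growing homotopy parameter
        g = stoch_grad(x) + beta * (x - proj_easy(x)) # gradient of smoothed problem
        d = (1.0 - rho) * d + rho * g                 # momentum averaging
        v = lmo(d)                                    # FW vertex of the hard set C
        x = x + eta * (v - x)                         # convex step keeps x inside C
    return x
```

For the semidefinite applications mentioned in the abstract, the LMO over a trace or nuclear-norm constraint reduces to an extreme eigenvector or top singular-vector computation, which is precisely why projection-free FW steps are attractive there.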
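
The zeroth-order variants query only function values. A generic two-point random-direction estimator of the kind standard in this literature can stand in for the stochastic gradient oracle above; the function name, smoothing radius mu, and batch size m below are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def two_point_grad_estimate(f_noisy, x, mu=1e-4, m=20, rng=None):
    """Sketch: two-point zeroth-order gradient estimator.

    f_noisy(x) returns only a stochastic function value; the gradient is
    approximated by averaging m Gaussian-direction finite differences
    with smoothing radius mu.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x, dtype=float)
    for _ in range(m):
        u = rng.standard_normal(x.shape)              # random direction
        delta = f_noisy(x + mu * u) - f_noisy(x - mu * u)
        g += (delta / (2.0 * mu)) * u                 # directional finite difference
    return g / m
```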
Related papers
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing a quadratic approximation of the objective subject to a trust-region constraint.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix.
arXiv Detail & Related papers (2024-09-24T04:39:47Z)
- A Stochastic-Gradient-based Interior-Point Algorithm for Solving Smooth Bound-Constrained Optimization Problems [12.29270365918848]
The proposed algorithm is based on an interior-point treatment of the bound constraints and is unique among interior-point methods in its use of stochastic gradient estimates.
It is shown that, with a careful balance between the projection and step-size sequences, the proposed algorithm attains convergence guarantees in both deterministic and stochastic settings.
arXiv Detail & Related papers (2023-04-28T15:30:43Z)
- Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features [65.64276393443346]
The Frank-Wolfe (FW) method is a popular approach for solving optimization problems with structured constraints.
We present two new variants of the FW algorithm for stochastic finite-sum minimization.
arXiv Detail & Related papers (2023-04-23T20:05:09Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- A Sequential Deep Learning Algorithm for Sampled Mixed-integer Optimisation Problems [0.3867363075280544]
We introduce and analyse two efficient algorithms for mixed-integer optimisation problems.
We show that both algorithms exhibit finite-time convergence towards the optimal solution.
We establish quantitatively the efficacy of these algorithms by means of three numerical tests.
arXiv Detail & Related papers (2023-01-25T17:10:52Z)
- A Sequential Quadratic Programming Method with High Probability Complexity Bounds for Nonlinear Equality Constrained Stochastic Optimization [2.3814052021083354]
It is assumed that constraint function values and derivatives are available, but only stochastic approximations of the objective function and its associated derivatives can be computed.
A high-probability bound on the iteration complexity of the algorithm to approximate first-order stationarity is derived.
arXiv Detail & Related papers (2023-01-01T21:46:50Z)
- A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians [11.03311584463036]
A sequential quadratic optimization algorithm is proposed for solving smooth nonlinear equality constrained optimization problems.
Results of numerical experiments demonstrate that the algorithm offers superior performance when compared to popular alternatives.
arXiv Detail & Related papers (2021-06-24T13:46:52Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with an unfavorable dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping; an illustrative sketch of the basic clipped update appears after this list.
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Conservative Stochastic Optimization with Expectation Constraints [11.393603788068777]
This paper considers convex optimization problems where the objective and constraint functions involve expectations with respect to the data indices or environmental variables.
Online and efficient approaches for solving such problems have not been widely studied.
We propose a novel conservative stochastic optimization algorithm (CSOA) that achieves zero constraint violation and an $O\left(T^{-\frac{1}{2}}\right)$ optimality gap.
arXiv Detail & Related papers (2020-08-13T08:56:24Z)
- IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
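
As referenced in the gradient-clipping entry above, the basic clipped update that such high-probability stepsize rules build on can be sketched as follows; the function name and signature are illustrative assumptions, not the cited paper's exact method.

```python
import numpy as np

def clipped_step(x, grad, lr, clip_level):
    """Sketch: one clipped (sub)gradient step. The gradient is rescaled
    onto a ball of radius clip_level before stepping, the standard device
    behind high-probability guarantees under heavy-tailed noise."""
    norm = np.linalg.norm(grad)
    if norm > clip_level:
        grad = grad * (clip_level / norm)  # clip onto the ball of radius clip_level
    return x - lr * grad
```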
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of this information and is not responsible for any consequences arising from its use.