A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear
Equality Constrained Optimization with Rank-Deficient Jacobians
- URL: http://arxiv.org/abs/2106.13015v1
- Date: Thu, 24 Jun 2021 13:46:52 GMT
- Title: A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear
Equality Constrained Optimization with Rank-Deficient Jacobians
- Authors: Albert S. Berahas, Frank E. Curtis, Michael J. O'Neill, Daniel P.
Robinson
- Abstract summary: A sequential quadratic optimization algorithm is proposed for solving smooth nonlinear equality constrained optimization problems.
Results of numerical experiments demonstrate that the algorithm offers superior performance when compared to popular alternatives.
- Score: 11.03311584463036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A sequential quadratic optimization algorithm is proposed for solving smooth
nonlinear equality constrained optimization problems in which the objective
function is defined by an expectation of a stochastic function. The algorithmic
structure of the proposed method is based on a step decomposition strategy that
is known in the literature to be widely effective in practice, wherein each
search direction is computed as the sum of a normal step (toward linearized
feasibility) and a tangential step (toward objective decrease in the null space
of the constraint Jacobian). However, the proposed method is unique from others
in the literature in that it both allows the use of stochastic objective
gradient estimates and possesses convergence guarantees even in the setting in
which the constraint Jacobians may be rank deficient. The results of numerical
experiments demonstrate that the algorithm offers superior performance when
compared to popular alternatives.
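A minimal NumPy sketch of the step decomposition described in the abstract; the function name, the least-squares normal step, and the pseudoinverse-based null-space projection are illustrative assumptions for the rank-deficient case, not the paper's exact update rules:

```python
import numpy as np

def decomposed_step(g_est, J, c):
    """One illustrative step decomposition (not the paper's exact rules).

    g_est : stochastic estimate of the objective gradient at x
    J     : constraint Jacobian at x (may be rank deficient)
    c     : constraint function values at x
    """
    # Normal step: move toward linearized feasibility by solving
    # min_v ||c + J v||; lstsq returns a minimum-norm solution,
    # so a rank-deficient Jacobian is tolerated.
    v, *_ = np.linalg.lstsq(J, -c, rcond=None)

    # Tangential step: project the negated gradient estimate onto the
    # null space of J, so it does not undo the normal step to first order.
    J_pinv = np.linalg.pinv(J)                 # pseudoinverse handles rank deficiency
    proj = np.eye(J.shape[1]) - J_pinv @ J     # projector onto null(J)
    u = -proj @ g_est

    return v + u   # search direction: normal step + tangential step
```

In the actual algorithm the two components are scaled by adaptively chosen step sizes and safeguarded against; the sketch shows only the geometry of the decomposition.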
Related papers
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a trust-region sequential quadratic programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing a quadratic approximation of the objective subject to a trust-region constraint.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix.
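A minimal sketch of how such an eigen step could be extracted from the reduced Hessian (names and tolerance are illustrative assumptions, not the paper's exact rule):

```python
import numpy as np

def eigen_step(H, Z, tol=1e-8):
    """Illustrative negative-curvature ("eigen") step.

    H : (approximate) Hessian of the Lagrangian at the current iterate
    Z : basis for the null space of the constraint Jacobian
    """
    H_red = Z.T @ H @ Z                      # reduced Hessian
    evals, evecs = np.linalg.eigh(H_red)     # ascending eigenvalues
    if evals[0] >= -tol:
        return None                          # no exploitable negative curvature
    w = evecs[:, 0]                          # direction of most negative curvature
    return Z @ w                             # lift back to the full space
```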
arXiv Detail & Related papers (2024-09-24T04:39:47Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
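For context, a generic cutting-plane loop looks roughly as follows; `find_cut` stands in for the separation oracle that the paper proposes to accelerate with a learned surrogate (all names here are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

def cutting_plane(c, A, b, find_cut, max_rounds=50):
    """Generic cutting-plane loop: minimize c @ x over {x : A x <= b},
    tightened by cuts returned from a separation oracle.
    """
    A, b = A.copy(), b.copy()
    for _ in range(max_rounds):
        res = linprog(c, A_ub=A, b_ub=b, bounds=(None, None))
        if not res.success:
            break
        cut = find_cut(res.x)            # None means the relaxation solution is accepted
        if cut is None:
            return res.x
        a_new, b_new = cut               # add the violated inequality a_new @ x <= b_new
        A = np.vstack([A, a_new])
        b = np.append(b, b_new)
    return None
```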
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- A Stochastic-Gradient-based Interior-Point Algorithm for Solving Smooth Bound-Constrained Optimization Problems [12.29270365918848]
The proposed algorithm is unique from other interior-point methods in that its search directions are computed from stochastic objective gradient estimates.
It is shown that, with a careful balance between the barrier, step-size, and neighborhood sequences, the proposed algorithm possesses convergence guarantees in both deterministic and stochastic settings.
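A heavily simplified sketch of one such step for bound constraints l <= x <= u; every rule below (barrier gradient, interiority clamp) is an assumption for illustration, not the paper's actual sequences:

```python
import numpy as np

def sgip_step(x, g_est, l, u, mu, alpha, theta=0.9):
    """One illustrative stochastic-gradient interior-point step for l <= x <= u.

    g_est : stochastic gradient estimate;  mu : barrier parameter
    alpha : step size;  theta : fraction of the distance to a bound kept
    """
    # Gradient of the barrier objective f(x) - mu * sum(log(x-l) + log(u-x))
    g_barrier = g_est - mu / (x - l) + mu / (u - x)
    x_new = x - alpha * g_barrier
    # Keep the iterate safely interior (a crude stand-in for the paper's
    # neighborhood sequence).
    lo = l + (1 - theta) * (x - l)
    hi = u - (1 - theta) * (u - x)
    return np.clip(x_new, lo, hi)
```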
arXiv Detail & Related papers (2023-04-28T15:30:43Z)
- A Sequential Quadratic Programming Method with High Probability Complexity Bounds for Nonlinear Equality Constrained Stochastic Optimization [2.3814052021083354]
It is assumed that constraint function values and derivatives are available, but only stochastic approximations of the objective function and its associated derivatives can be computed.
A high-probability bound on the iteration complexity of the algorithm to approximate first-order stationarity is derived.
arXiv Detail & Related papers (2023-01-01T21:46:50Z)
- Adaptive Stochastic Optimisation of Nonconvex Composite Objectives [2.1700203922407493]
We propose and analyse a family of generalised composite mirror descent algorithms.
With adaptive step sizes, the proposed algorithms converge without requiring prior knowledge of the problem.
We exploit the low-dimensional structure of the decision sets for high-dimensional problems.
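One concrete instance of such an update, assuming a Euclidean mirror map, an AdaGrad-style adaptive step size, and an l1 composite term (the paper's family is more general):

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_composite_md(x, g, grad_sq_sum, lam=0.1, eta=1.0):
    """One AdaGrad-style composite mirror-descent update.

    g : (stochastic) gradient of the smooth part at x
    grad_sq_sum : running sum of squared gradients (updated and returned)
    """
    grad_sq_sum = grad_sq_sum + g * g
    step = eta / (np.sqrt(grad_sq_sum) + 1e-12)   # adaptive per-coordinate step
    # Composite step: gradient step on the smooth part, prox step on lam*||x||_1
    x_new = soft_threshold(x - step * g, step * lam)
    return x_new, grad_sq_sum
```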
arXiv Detail & Related papers (2022-11-21T18:31:43Z)
- Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationary points yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z)
- Inequality Constrained Stochastic Nonlinear Optimization via Active-Set Sequential Quadratic Programming [17.9230793188835]
We study nonlinear optimization problems with a stochastic objective and deterministic equality and inequality constraints.
We propose an active-set sequential quadratic programming algorithm, using a differentiable exact augmented Lagrangian as the merit function.
The algorithm adaptively selects the parameters of the augmented Lagrangian and performs a line search to decide the stepsize.
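A minimal sketch of a backtracking line search on a merit function; the `merit` callable would be the differentiable exact augmented Lagrangian, and the finite-difference directional derivative below is a stand-in for the analytic one the paper can use:

```python
def backtracking_line_search(merit, x, d, alpha0=1.0, rho=0.5, c1=1e-4,
                             max_iter=30):
    """Armijo backtracking on a merit function (illustrative only).

    merit : callable returning the merit value at a point
    d     : search direction (assumed a descent direction for merit)
    """
    phi0 = merit(x)
    eps = 1e-8
    dphi0 = (merit(x + eps * d) - phi0) / eps   # directional derivative estimate
    alpha = alpha0
    for _ in range(max_iter):
        if merit(x + alpha * d) <= phi0 + c1 * alpha * dphi0:
            return alpha                        # sufficient decrease achieved
        alpha *= rho                            # shrink and retry
    return alpha
```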
arXiv Detail & Related papers (2021-09-23T17:12:17Z)
- On Constraints in First-Order Optimization: A View from Non-Smooth Dynamical Systems [99.59934203759754]
We introduce a class of first-order methods for smooth constrained optimization.
Two distinctive features of our approach are that projections or optimizations over the entire feasible set are avoided.
The resulting algorithmic procedure is simple to implement even when constraints are nonlinear.
arXiv Detail & Related papers (2021-07-17T11:45:13Z)
- Zeroth and First Order Stochastic Frank-Wolfe Algorithms for Constrained Optimization [13.170519806372075]
Problems of convex optimization with two sets of constraints arise frequently in the context of semidefinite programming.
Since projection onto the first set of constraints is difficult, it becomes necessary to explore projection-free algorithms.
The efficacy of the proposed algorithms is tested on relevant applications of sparse matrix estimation, clustering via semidefinite relaxation, and uniform sparsest cut problem.
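A minimal sketch of one projection-free (Frank-Wolfe) step; the linear minimization oracle `lmo` is problem specific and is the reason no projection onto the constraint set is needed:

```python
def frank_wolfe_step(x, g_est, lmo, t):
    """One stochastic Frank-Wolfe step (illustrative).

    g_est : stochastic gradient estimate at x
    lmo   : linear minimization oracle, s = argmin_{s in C} <g_est, s>;
            often far cheaper than projecting onto C
    t     : iteration counter (classic 2/(t+2) step size)
    """
    s = lmo(g_est)
    gamma = 2.0 / (t + 2.0)
    return x + gamma * (s - x)      # stays in C as a convex combination
```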
arXiv Detail & Related papers (2021-07-14T08:01:30Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with an unfavorable dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
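A minimal sketch of one SGD step with gradient clipping; the paper's contribution is the stepsize rules and the high-probability analysis, not this basic step:

```python
import numpy as np

def clipped_sgd_step(x, g_est, stepsize, clip_level):
    """One SGD step with norm clipping. Clipping caps the influence of
    heavy-tailed noise, which is what enables high-probability bounds
    with mild dependence on the confidence level.
    """
    norm = np.linalg.norm(g_est)
    if norm > clip_level:
        g_est = g_est * (clip_level / norm)   # rescale to the clip level
    return x - stepsize * g_est
```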
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence guarantee for non-PL objectives while simultaneously outperforming existing algorithms for PL (Polyak-Lojasiewicz) objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.