Revisiting Frank-Wolfe for Polytopes: Strict Complementarity and
Sparsity
- URL: http://arxiv.org/abs/2006.00558v4
- Date: Wed, 6 Jan 2021 20:33:49 GMT
- Title: Revisiting Frank-Wolfe for Polytopes: Strict Complementarity and
Sparsity
- Authors: Dan Garber
- Abstract summary: We prove that the Frank-Wolfe algorithm converges linearly with rate that depends explicitly only on the dimension of the optimal face.
We motivate strict complementarity by proving that it implies sparsity-robustness of optimal solutions to noise.
- Score: 19.24470467199451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years it was proved that simple modifications of the classical
Frank-Wolfe algorithm (aka conditional gradient algorithm) for smooth convex
minimization over convex and compact polytopes, converge with linear rate,
assuming the objective function has the quadratic growth property. However, the
rate of these methods depends explicitly on the dimension of the problem which
cannot explain their empirical success for large scale problems. In this paper
we first demonstrate that already for very simple problems and even when the
optimal solution lies on a low-dimensional face of the polytope, such
dependence on the dimension cannot be avoided in worst case. We then revisit
the addition of a strict complementarity assumption already considered in
Wolfe's classical book \cite{Wolfe1970}, and prove that under this condition,
the Frank-Wolfe method with away-steps and line-search converges linearly with
rate that depends explicitly only on the dimension of the optimal face. We
motivate strict complementarity by proving that it implies sparsity-robustness
of optimal solutions to noise.
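As a concrete illustration of the algorithm the paper analyzes, below is a minimal sketch of Frank-Wolfe with away-steps and exact line-search, here over the probability simplex with a simple quadratic objective. The polytope, the objective, and all names are illustrative assumptions for the sketch, not the paper's setup; the point is that the iterates' support contracts onto the low-dimensional optimal face.

```python
import numpy as np

def afw_simplex(b, n_iters=500):
    """Away-step Frank-Wolfe for min 0.5*||x - b||^2 over the probability simplex."""
    n = len(b)
    x = np.zeros(n)
    x[0] = 1.0                                # start at vertex e_0
    weights = {0: 1.0}                        # active set: vertex index -> convex weight

    for _ in range(n_iters):
        g = x - b                             # gradient of the quadratic objective

        s = int(np.argmin(g))                 # Frank-Wolfe vertex: argmin_v <g, v>
        a = max(weights, key=lambda i: g[i])  # away vertex: argmax over the active set

        d_fw = -x.copy(); d_fw[s] += 1.0      # e_s - x
        d_aw = x.copy();  d_aw[a] -= 1.0      # x - e_a

        if -(g @ d_fw) >= -(g @ d_aw):        # FW direction gives at least as much progress
            d, gamma_max, away = d_fw, 1.0, None
        else:                                 # otherwise move mass away from vertex a
            alpha = weights[a]
            d, gamma_max, away = d_aw, alpha / (1.0 - alpha), a

        if d @ d < 1e-18:                     # no direction left: optimal vertex reached
            break
        # exact line-search for the quadratic: argmin_gamma f(x + gamma * d)
        gamma = min(max(-(g @ d) / (d @ d), 0.0), gamma_max)
        x = x + gamma * d

        if away is None:                      # FW step: shrink all weights, grow vertex s
            weights = {i: (1 - gamma) * w for i, w in weights.items()}
            weights[s] = weights.get(s, 0.0) + gamma
        else:                                 # away step: grow all weights, shrink vertex a
            weights = {i: (1 + gamma) * w for i, w in weights.items()}
            weights[a] -= gamma
        weights = {i: w for i, w in weights.items() if w > 1e-12}

    return x

# The minimizer is the Euclidean projection of b onto the simplex; the support of the
# iterates shrinks onto the low-dimensional optimal face.
print(afw_simplex(np.array([0.6, 0.5, -0.2, -0.3])))   # ~ [0.55, 0.45, 0, 0]
```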
Related papers
- Revisiting Convergence: Shuffling Complexity Beyond Lipschitz Smoothness [50.78508362183774]
Shuffling-type gradient methods are favored in practice for their simplicity and rapid empirical performance. Most existing analyses require the Lipschitz smoothness condition, which is often not met in common machine learning settings.
arXiv Detail & Related papers (2025-07-11T15:36:48Z) - Entropic Mirror Descent for Linear Systems: Polyak's Stepsize and Implicit Bias [55.72269695392027]
This paper focuses on applying entropic mirror descent to solve linear systems. The main challenge for the convergence analysis stems from the unboundedness of the domain. To overcome this without imposing restrictive assumptions, we introduce a variant of Polyak-type stepsizes.
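A minimal sketch of the entropic mirror descent update with a Polyak-type stepsize, assuming f(x) = 0.5*||Ax - b||^2 with known optimal value f* = 0 (a consistent system) and the stepsize f(x_t)/||g_t||_inf^2. The paper's analysis targets an unbounded domain; this sketch restricts to the probability simplex for a compact, stable illustration of the update rule, not the paper's exact method.

```python
import numpy as np

def entropic_md_polyak(A, b, n_iters=2000):
    n = A.shape[1]
    x = np.full(n, 1.0 / n)                    # uniform starting point in the simplex
    for _ in range(n_iters):
        r = A @ x - b
        f = 0.5 * (r @ r)                      # f* = 0 since the system is consistent
        g = A.T @ r
        eta = f / max(np.linalg.norm(g, np.inf) ** 2, 1e-12)  # Polyak-type stepsize
        x = x * np.exp(-eta * g)               # entropic (multiplicative) update
        x /= x.sum()                           # Bregman projection back onto the simplex
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 8))
x_star = rng.dirichlet(np.ones(8))             # a solution lying in the simplex
b = A @ x_star
print(np.linalg.norm(A @ entropic_md_polyak(A, b) - b))  # residual shrinks toward 0
```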
arXiv Detail & Related papers (2025-05-05T12:33:18Z) - Revisiting Frank-Wolfe for Structured Nonconvex Optimization [33.44652927142219]
We introduce a new projection-free (Frank-Wolfe) method for optimizing structured nonconvex functions that are expressed as a difference of two convex functions.
We prove a convergence-rate guarantee for the proposed method.
arXiv Detail & Related papers (2025-03-11T22:09:44Z) - Performative Reinforcement Learning with Linear Markov Decision Process [14.75815792682734]
We study the setting of performative reinforcement learning, where the deployed policy affects both the reward and the transitions of the underlying Markov decision process.
We generalize the results to linear Markov decision processes, which are the primary theoretical model for large-scale MDPs.
arXiv Detail & Related papers (2024-11-07T23:04:48Z) - Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity [50.25258834153574]
We focus on the class of (strongly) convex $(L_0,L_1)$-smooth functions and derive new convergence guarantees for several existing methods.
In particular, we derive improved convergence rates for Gradient Descent with Smoothed Gradient Clipping and for Gradient Descent with Polyak Stepsizes.
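A minimal sketch of the clipped-stepsize gradient descent idea for an $(L_0,L_1)$-smooth function, i.e. one whose Hessian norm is bounded by $L_0 + L_1\|\nabla f(x)\|$. The test function and the particular stepsize $1/(L_0 + L_1\|g\|)$ are assumptions for illustration, not the paper's precise schemes.

```python
import numpy as np

def clipped_gd(grad_f, x0, L0=1.0, L1=1.0, n_iters=300):
    x = x0.astype(float)
    for _ in range(n_iters):
        g = grad_f(x)
        # clipped/normalized stepsize: ~1/L0 when the gradient is small,
        # ~1/(L1*||g||) when it is large
        eta = 1.0 / (L0 + L1 * np.linalg.norm(g))
        x = x - eta * g
    return x

# f(x) = sum_i cosh(x_i - b_i) is (1,1)-smooth: its Hessian diagonal is
# cosh(x_i - b_i) = sqrt(1 + sinh(x_i - b_i)^2) <= 1 + ||grad f(x)||.
b = np.array([3.0, -2.0, 5.0])
grad_f = lambda x: np.sinh(x - b)
print(clipped_gd(grad_f, np.zeros(3)))   # approaches the minimizer b
```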
arXiv Detail & Related papers (2024-09-23T13:11:37Z) - Directional Smoothness and Gradient Methods: Convergence and Adaptivity [16.779513676120096]
We develop new sub-optimality bounds for gradient descent that depend on the conditioning of the objective along the path of optimization.
Key to our proofs is directional smoothness, a measure of gradient variation that we use to develop upper-bounds on the objective.
We prove that the Polyak step-size and normalized GD obtain fast, path-dependent rates despite using no knowledge of the directional smoothness.
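For reference, the Polyak step-size mentioned here uses only function values and the optimal value $f^*$; a minimal sketch on a consistent least-squares instance (so $f^* = 0$ is known) follows. The problem instance is an illustrative assumption.

```python
import numpy as np

# A consistent least-squares instance so that f* = 0 is known exactly.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 10))
b = A @ rng.normal(size=10)

x, f_star = np.zeros(10), 0.0
for _ in range(200):
    r = A @ x - b
    f = 0.5 * (r @ r)
    g = A.T @ r
    x -= ((f - f_star) / max(g @ g, 1e-12)) * g   # Polyak step: (f(x) - f*) / ||g||^2
print(np.linalg.norm(A @ x - b))                  # residual decays with no tuned rate
```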
arXiv Detail & Related papers (2024-03-06T22:24:05Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems, including Max- and Min-Cut, Max-$k$-CSP, Maximum-Weight Bipartite Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features [65.64276393443346]
The Frank-Wolfe (FW) method is a popular approach for solving optimization problems with structured constraints.
We present two new variants of the FW algorithm for stochastic finite-sum minimization.
arXiv Detail & Related papers (2023-04-23T20:05:09Z) - Adaptive Approximate Implicitization of Planar Parametric Curves via
Weak Gradient Constraints [11.559302398966468]
We introduce a new regularization constraint for both polynomial and non-polynomial curves.
We then propose two adaptive algorithms of approximate implicitization for both polynomial and non-polynomial curves.
More precisely, the idea is to gradually increase the implicit degree until there is no obvious improvement in the weak gradient loss of the outputs.
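A minimal sketch of the degree-escalation loop described above, with the paper's weak-gradient loss replaced by a plain algebraic least-squares residual (an assumption for brevity): the implicit degree grows until the fitting loss stops improving.

```python
import numpy as np

def implicitize(points, max_degree=8, tol=1e-8):
    """Fit implicit polynomial coefficients of increasing degree to curve samples."""
    prev_loss, best = np.inf, None
    for d in range(1, max_degree + 1):
        # monomial basis x^i * y^j with i + j <= d, evaluated at the samples
        M = np.column_stack([points[:, 0] ** i * points[:, 1] ** j
                             for i in range(d + 1) for j in range(d + 1 - i)])
        _, svals, Vt = np.linalg.svd(M, full_matrices=False)
        loss, coeffs = svals[-1], Vt[-1]     # min ||M c|| subject to ||c|| = 1
        if loss < tol or prev_loss - loss < 1e-3 * prev_loss:
            return d, coeffs                 # no obvious improvement: stop here
        prev_loss, best = loss, (d, coeffs)
    return best

t = np.linspace(0.0, 2 * np.pi, 200)
circle = np.column_stack([np.cos(t), np.sin(t)])   # implicit degree 2: x^2 + y^2 - 1
degree, coeffs = implicitize(circle)
print(degree)                                       # stops at degree 2
```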
arXiv Detail & Related papers (2023-02-23T04:08:53Z) - High-Probability Bounds for Stochastic Optimization and Variational
Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z) - Randomized Coordinate Subgradient Method for Nonsmooth Composite
Optimization [11.017632675093628]
Coordinate-type subgradient methods for addressing nonsmooth problems are relatively underexplored, in part because standard Lipschitz-type assumptions are restrictive in this setting.
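A minimal sketch of a randomized coordinate subgradient step on a nonsmooth composite objective (a lasso-type instance, chosen for illustration; it is not the paper's setting): one coordinate is sampled per iteration and updated along a subgradient with a diminishing, coordinate-scaled stepsize.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 10))
x_true = rng.normal(size=10) * (rng.random(10) < 0.4)   # sparse ground truth
b = A @ x_true
lam = 0.1
col_sq = (A ** 2).sum(axis=0)         # per-coordinate Lipschitz constants of the smooth part

x = np.zeros(10)
for t in range(1, 20001):
    i = rng.integers(10)              # sample one coordinate uniformly at random
    # subgradient of 0.5*||Ax - b||^2 + lam*||x||_1 in coordinate i
    g_i = A[:, i] @ (A @ x - b) + lam * np.sign(x[i])
    x[i] -= g_i / (col_sq[i] * np.sqrt(t))   # Lipschitz-scaled diminishing step
print(np.round(x, 2))                 # roughly recovers the sparse x_true
```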
arXiv Detail & Related papers (2022-06-30T02:17:11Z) - Efficient Algorithms for High-Dimensional Convex Subspace Optimization
via Strict Complementarity [19.24470467199451]
We consider optimization problems in which the goal is to find a $k$-dimensional subspace of $\mathbb{R}^n$, $k \ll n$, which minimizes a convex and smooth loss.
While this problem arises in many high-dimensional settings, it is difficult to find a globally optimal solution.
In this paper we present a natural convex relaxation for which convergence to the global optimum is straightforward.
arXiv Detail & Related papers (2022-02-08T17:36:43Z) - On the Convergence Rates of Policy Gradient Methods [9.74841674275568]
We consider geometrically discounted Markov decision problems with finite state and action spaces.
We show that with direct parametrization in the policy space, we can analyze the iteration complexity of policy gradient methods.
arXiv Detail & Related papers (2022-01-19T07:03:37Z) - A Feasible Level Proximal Point Method for Nonconvex Sparse Constrained
Optimization [25.73397307080647]
We present a new model for a general class of convex and nonconvex sparsity-constrained machine learning objectives.
We propose an algorithm that solves each subproblem with gradually relaxed constraint levels.
We demonstrate the effectiveness of our method on numerical problems at scale.
arXiv Detail & Related papers (2020-10-23T05:24:05Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Gradient-Free Methods for Saddle-Point Problem [125.99533416395765]
We generalize the approach of Gasnikov et al. (2017), which allows one to solve (stochastic) convex optimization problems with an inexact gradient-free oracle.
Our approach reduces the required number of oracle calls by a factor of $\frac{n}{\log n}$.
In the second part of the paper, we analyze the case when such an assumption cannot be made, and propose a general approach for modifying the method to solve this problem.
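A minimal sketch of the two-point gradient-free (zeroth-order) estimator that such methods build on, applied to plain smooth minimization for brevity; the paper's saddle-point setting and inexact-oracle analysis are not reproduced, and all names here are illustrative.

```python
import numpy as np

def two_point_grad_estimate(f, x, n_dirs=8, mu=1e-4, rng=None):
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        u /= np.linalg.norm(u)                      # random direction on the unit sphere
        # directional finite difference: <grad f(x), u> ~ (f(x+mu*u) - f(x-mu*u)) / (2*mu)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    # E[n * <grad, u> * u] = grad for u uniform on the sphere, so rescale by n / n_dirs
    return g * (len(x) / n_dirs)

f = lambda x: 0.5 * np.sum((x - 1.0) ** 2)          # toy smooth convex objective
x = np.zeros(5)
rng = np.random.default_rng(2)
for _ in range(500):
    x -= 0.05 * two_point_grad_estimate(f, x, rng=rng)
print(x)                                            # approaches the minimizer (all ones)
```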
arXiv Detail & Related papers (2020-05-12T16:44:27Z)