Direct-Search for a Class of Stochastic Min-Max Problems
- URL: http://arxiv.org/abs/2102.11386v1
- Date: Mon, 22 Feb 2021 22:23:58 GMT
- Title: Direct-Search for a Class of Stochastic Min-Max Problems
- Authors: Sotiris Anagnostidis, Aurelien Lucchi, Youssef Diouane
- Abstract summary: We investigate the use of direct-search methods, derivative-free techniques that only access the objective function through an oracle.
We prove convergence of this technique under mild assumptions.
Our analysis is the first one to address the convergence of a direct-search method for min-max objectives in a stochastic setting.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent applications in machine learning have renewed the interest of the
community in min-max optimization problems. While gradient-based optimization
methods are widely used to solve such problems, there are, however, many
scenarios where these techniques are not well-suited, or even not applicable
when the gradient is not accessible. We investigate the use of direct-search
methods that belong to a class of derivative-free techniques that only access
the objective function through an oracle. In this work, we design a novel
algorithm in the context of min-max saddle point games where one sequentially
updates the min and the max player. We prove convergence of this algorithm
under mild assumptions, where the objective of the max-player satisfies the
Polyak-Łojasiewicz (PL) condition, while the min-player is characterized by
a nonconvex objective. Our method only requires estimates of the oracle that
are accurate, up to a dynamically adjusted tolerance, with a fixed
probability. To the best of our knowledge,
our analysis is the first one to address the convergence of a direct-search
method for min-max objectives in a stochastic setting.
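As an illustration of the approach described above, here is a minimal Python sketch of a sequential direct-search loop for min_x max_y f(x, y). It is not the authors' algorithm: the noisy oracle f, the random polling set, the forcing term, and the step-size rules are simplified, hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, y):
    # Hypothetical noisy oracle: nonconvex in x, strongly concave (hence PL) in y.
    return np.sin(x @ x) + x @ y - 0.5 * (y @ y) + 0.01 * rng.standard_normal()

def poll(center, step, obj, c=1e-2, n_dirs=8):
    """One poll step: test random unit directions and accept the first point
    that decreases obj by at least the forcing term c * step**2."""
    base = obj(center)
    for _ in range(n_dirs):
        d = rng.standard_normal(center.shape)
        d /= np.linalg.norm(d)
        trial = center + step * d
        if obj(trial) < base - c * step**2:   # sufficient decrease
            return trial, 2.0 * step          # success: expand the step size
    return center, 0.5 * step                 # failure: shrink the step size

x, y = rng.standard_normal(2), rng.standard_normal(2)
ax, ay = 1.0, 1.0                             # step sizes of min and max player
for _ in range(200):
    # Sequential updates: first the max player, then the min player.
    y, ay = poll(y, ay, lambda v: -f(x, v))   # ascend on f by descending on -f
    x, ax = poll(x, ax, lambda v: f(v, y))    # descend on f
print("approximate solution:", x, y)
```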
Related papers
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm is only accessible to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z)
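To illustrate the noisy zeroth-order access studied in the entry above, here is a generic sketch using the standard two-point gradient estimator; the objective noisy_f and the step schedules are hypothetical, and the paper's minimax-optimal procedure is more refined than this.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x, sigma=0.01):
    # Hypothetical strongly convex objective, observed only with additive noise.
    return 0.5 * x @ x + sigma * rng.standard_normal()

def zo_gradient(x, h=0.05):
    """Two-point zeroth-order gradient estimate along a random unit direction."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    return len(x) * (noisy_f(x + h * u) - noisy_f(x - h * u)) / (2 * h) * u

x = rng.standard_normal(5)
for t in range(1, 2001):
    x -= (1.0 / t) * zo_gradient(x)   # diminishing steps suit strong convexity
print("iterate approaches the minimizer at 0:", x)
```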
- Riemannian stochastic optimization methods avoid strict saddle points [68.80251170757647]
We show that the methods under study avoid strict saddle points / submanifolds with probability 1.
This result provides an important sanity check as it shows that, almost always, the limit state of an algorithm can only be a local minimizer.
arXiv Detail & Related papers (2023-11-04T11:12:24Z)
- Smoothed Analysis of Sequential Probability Assignment [16.090378928208885]
We study information-theoretically optimal minimax rates as well as a framework for algorithmic reduction involving the maximum likelihood estimator oracle.
Our approach establishes a general-purpose reduction from minimax rates for sequential probability assignment for smoothed adversaries to minimax rates for transductive learning.
arXiv Detail & Related papers (2023-03-08T19:25:57Z)
- Min-Max Optimization Made Simple: Approximating the Proximal Point Method via Contraction Maps [77.8999425439444]
We present a first-order method that admits near-optimal convergence rates for convex/concave min-max problems.
Our work is based on the fact that the update rule of the Proximal Point method can be approximated up to any desired accuracy.
arXiv Detail & Related papers (2023-01-10T12:18:47Z)
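The contraction-map idea above can be sketched on a toy bilinear game: the Proximal Point update is the fixed point of a contraction whenever lam * ||A|| < 1, so a few iterations of that map approximate it. The game matrix A, the step lam, and the inner iteration count are illustrative assumptions, not the paper's construction.

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 1.0]])  # toy convex-concave game f(x, y) = x^T A y
lam = 0.2                               # lam * ||A|| < 1: the inner map contracts

x, y = np.ones(2), np.ones(2)
for _ in range(100):
    # The Proximal Point update (x+, y+) solves the fixed-point conditions
    #   x+ = x - lam * grad_x f(x+, y+),   y+ = y + lam * grad_y f(x+, y+),
    # so a few iterations of this contraction map approximate it.
    u, v = x.copy(), y.copy()
    for _ in range(5):
        u, v = x - lam * (A @ v), y + lam * (A.T @ u)
    x, y = u, v
print("converges toward the saddle point at the origin:", x, y)
```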
- Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress, having no limit cycles, and to find an appropriate stationary point in a polynomial number of gradient evaluations.
arXiv Detail & Related papers (2021-06-02T22:03:36Z)
- A Hölderian backtracking method for min-max and min-min problems [0.0]
We present a new algorithm to solve min-max or min-min problems outside the convex world.
We use rigidity assumptions, ubiquitous in learning, making our method applicable to many optimization problems.
arXiv Detail & Related papers (2020-07-17T08:12:31Z)
- A Convergent and Dimension-Independent Min-Max Optimization Algorithm [32.492526162436405]
We consider a setting where the distribution that the min-player uses to update its parameters depends on a smooth and bounded nonconvex-nonconcave function.
Our algorithm converges to an approximate local equilibrium in a number of iterations that does not depend on the dimension.
arXiv Detail & Related papers (2020-06-22T16:11:30Z)
- Non-convex Min-Max Optimization: Applications, Challenges, and Recent Theoretical Advances [58.54078318403909]
The min-max problem, also known as the saddle point problem, is a classical adversarial problem that is also studied in the context of zero-sum games.
arXiv Detail & Related papers (2020-06-15T05:33:42Z)
- Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z)
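A generic sketch of a randomized approximation of projected gradient descent, as in the zeroth-order entry above, on a strongly convex toy function over a Euclidean ball; the one-direction gradient surrogate and the schedules are illustrative and do not exploit the higher-order smoothness the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return 0.5 * (x - 2.0) @ (x - 2.0)   # strongly convex, unconstrained min at (2,...,2)

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    n = np.linalg.norm(x)
    return x if n <= radius else radius * x / n

d = 4
x = np.zeros(d)
for t in range(1, 3001):
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    h = t ** -0.5                             # shrinking smoothing radius
    g = d * (f(x + h * u) - f(x)) / h * u     # randomized gradient surrogate
    x = project_ball(x - (0.5 / t) * g)       # projected descent step
print("iterate approaches the constrained minimizer:", x)
```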
- Solving Non-Convex Non-Differentiable Min-Max Games using Proximal Gradient Method [10.819408603463426]
Min-max saddle point games appear in a wide range of applications in machine learning and signal processing.
We show that a simple multi-step proximal gradient descent-ascent algorithm improves upon existing methods in the literature.
arXiv Detail & Related papers (2020-03-18T08:38:34Z)
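To illustrate the multi-step proximal gradient descent-ascent scheme of the entry above, here is a sketch on a toy nonsmooth problem min_x max_y f(x, y) + lam * ||x||_1; the smooth part f, the five inner ascent steps, and the step size eta are hypothetical choices, not the paper's exact algorithm.

```python
import numpy as np

def soft_threshold(v, tau):
    """Closed-form prox of tau * ||.||_1, the non-differentiable piece."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

A = np.array([[1.0, 0.5], [0.0, 1.0]])
lam, eta = 0.1, 0.1

def grad_x(x, y): return x + A @ y    # smooth part: 0.5||x||^2 + x^T A y - 0.5||y||^2
def grad_y(x, y): return A.T @ x - y

x, y = np.ones(2), np.ones(2)
for _ in range(300):
    for _ in range(5):                # multi-step ascent for the max player
        y = y + eta * grad_y(x, y)
    x = soft_threshold(x - eta * grad_x(x, y), eta * lam)  # proximal gradient step
print("solution with the l1 term is x = 0:", x, y)
```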