Minimax Optimization with Smooth Algorithmic Adversaries
- URL: http://arxiv.org/abs/2106.01488v1
- Date: Wed, 2 Jun 2021 22:03:36 GMT
- Title: Minimax Optimization with Smooth Algorithmic Adversaries
- Authors: Tanner Fiez, Chi Jin, Praneeth Netrapalli, Lillian J. Ratliff
- Abstract summary: We propose a new algorithm for the min-player to play against smooth algorithms deployed by the adversary.
Our algorithm is guaranteed to make monotonic progress (thus having no limit cycles), and to find an appropriate "stationary point" in a polynomial number of iterations.
- Score: 59.47122537182611
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers minimax optimization $\min_x \max_y f(x, y)$ in the
challenging setting where $f$ can be both nonconvex in $x$ and nonconcave in
$y$. Though such optimization problems arise in many machine learning paradigms
including training generative adversarial networks (GANs) and adversarially
robust models, many fundamental issues remain in theory, such as the absence of
efficiently computable optimality notions, and cyclic or diverging behavior of
existing algorithms. Our framework sprouts from the practical consideration
that under a computational budget, the max-player cannot fully maximize
$f(x,\cdot)$ since nonconcave maximization is NP-hard in general. So, we
propose a new algorithm for the min-player to play against smooth algorithms
deployed by the adversary (i.e., the max-player) instead of against full
maximization. Our algorithm is guaranteed to make monotonic progress (thus
having no limit cycles), and to find an appropriate "stationary point" in a
polynomial number of iterations. Our framework covers practical settings where
the smooth algorithms deployed by the adversary are multi-step stochastic
gradient ascent, and its accelerated version. We further provide complementing
experiments that confirm our theoretical findings and demonstrate the
effectiveness of the proposed approach in practice.
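To make the framework concrete, the sketch below illustrates the basic recipe the abstract describes: the adversary's response is computed by a smooth algorithm (here, $K$ steps of gradient ascent), and the min-player runs gradient descent on the composition $x \mapsto f(x, A(x))$, differentiating through the adversary's computation. This is a minimal illustration, not the authors' exact algorithm; the toy objective, step sizes, and number of ascent steps are assumptions chosen only for the example.
```python
# Minimal JAX sketch of the high-level idea (not the paper's exact algorithm):
# the min-player descends on x |-> f(x, A(x)), where A is the adversary's
# smooth K-step gradient-ascent response, differentiated through with autodiff.
import jax
import jax.numpy as jnp

def f(x, y):
    # Toy nonconvex-nonconcave objective, standing in for a GAN or robust-training loss.
    return jnp.sum(x * y) - 0.5 * jnp.sum(jnp.sin(x) ** 2) + 0.5 * jnp.sum(jnp.sin(y) ** 2)

def adversary_response(x, y0, eta_y=0.1, K=10):
    # Smooth algorithmic adversary: K steps of gradient ascent on f(x, .).
    y = y0
    for _ in range(K):
        y = y + eta_y * jax.grad(f, argnums=1)(x, y)
    return y

def outer_objective(x, y0):
    # The min-player's surrogate: f evaluated at the adversary's computed response.
    return f(x, adversary_response(x, y0))

@jax.jit
def min_player_step(x, y0, eta_x=0.05):
    # Gradient descent on the surrogate, differentiating through the K ascent steps.
    g = jax.grad(outer_objective)(x, y0)
    return x - eta_x * g

# Usage: iterate min_player_step; under assumptions on the adversary's smooth
# algorithm, descent on this surrogate is what rules out limit cycles.
x, y0 = jnp.ones(3), jnp.zeros(3)
for _ in range(100):
    x = min_player_step(x, y0)
```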
Related papers
- Practical $0.385$-Approximation for Submodular Maximization Subject to a Cardinality Constraint [18.290666675596984]
Non-monotone constrained submodular maximization plays a crucial role in various machine learning applications.
Existing algorithms often struggle with a trade-off between approximation guarantees and practical efficiency.
We present a novel algorithm that combines a guarantee of $0.385$-approximation with a low and practical query complexity of $O(n+k^2)$.
arXiv Detail & Related papers (2024-05-22T20:56:57Z) - Certified Multi-Fidelity Zeroth-Order Optimization [4.450536872346658]
We consider the problem of multi-fidelity zeroth-order optimization, where one can evaluate a function $f$ at various approximation levels.
We propose a certified variant of the MFDOO algorithm and derive a bound on its cost complexity for any Lipschitz function $f$.
We also prove an $f$-dependent lower bound showing that this algorithm has a near-optimal cost complexity.
arXiv Detail & Related papers (2023-08-02T07:20:37Z) - Target-based Surrogates for Stochastic Optimization [26.35752393302125]
We consider minimizing functions for which it is expensive to compute the (possibly stochastic) gradient.
Such functions are prevalent in reinforcement learning, imitation learning and adversarial training.
Our framework allows the use of standard optimization algorithms to construct surrogates which can be minimized efficiently.
arXiv Detail & Related papers (2023-02-06T08:08:34Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Adapting to Misspecification in Contextual Bandits [82.55565343668246]
We introduce a new family of oracle-efficient algorithms for $\varepsilon$-misspecified contextual bandits.
We obtain the first algorithm that achieves the optimal $O(d\sqrt{T} + \varepsilon\sqrt{d}T)$ regret bound for unknown misspecification level.
arXiv Detail & Related papers (2021-07-12T21:30:41Z) - Byzantine-Resilient Non-Convex Stochastic Gradient Descent [61.6382287971982]
We study adversary-resilient distributed optimization, in which machines can independently compute gradients and cooperate to optimize a shared objective.
Our algorithm is based on a new concentration technique, and we bound its sample complexity.
It is very practical: it improves upon the performance of all prior methods when no Byzantine machines are present.
arXiv Detail & Related papers (2020-12-28T17:19:32Z) - Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data science, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z) - Fast Adaptive Non-Monotone Submodular Maximization Subject to a Knapsack
Constraint [13.357957711519134]
Constrained submodular maximization problems encompass a variety of applications, including personalized recommendation, team formation, and revenue maximization via viral marketing.
We present a simple randomized greedy algorithm that achieves a $5.83$-approximation and runs in $O(n \log n)$ time, i.e., at least a factor $n$ faster than other state-of-the-art algorithms (a generic greedy baseline for this problem class is sketched after this list).
In the adaptive setting, we obtain a 9-approximation, which is the first constant-factor approximation for non-monotone objectives. Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data.
arXiv Detail & Related papers (2020-07-09T18:15:01Z) - Hybrid Variance-Reduced SGD Algorithms For Nonconvex-Concave Minimax
Problems [26.24895953952318]
We develop algorithms to solve a class of nonconvex-concave minimax problems.
They can work with either single-sample or mini-batch gradient estimates.
arXiv Detail & Related papers (2020-06-27T03:05:18Z) - Gradient Free Minimax Optimization: Variance Reduction and Faster
Convergence [120.9336529957224]
In this paper, we study gradient-free (zeroth-order) minimax optimization in the nonconvex-strongly-concave setting.
We show that a novel zeroth-order variance-reduced descent-ascent algorithm achieves the best known query complexity.
arXiv Detail & Related papers (2020-06-16T17:55:46Z)
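As a point of reference for the zeroth-order methods mentioned above (the gradient-free minimax entry directly preceding, and the earlier multi-fidelity entry), the sketch below uses a standard two-point Gaussian-smoothing gradient estimator inside plain gradient descent-ascent. It is not the variance-reduced algorithm of that paper; the smoothing radius, step size, and toy objective are illustrative assumptions.
```python
import numpy as np

def zo_grad(f, z, mu=1e-3, rng=np.random.default_rng(0)):
    # Two-point Gaussian-smoothing estimate of the gradient of f at z,
    # using only function evaluations (no autodiff).
    u = rng.standard_normal(z.shape)
    return (f(z + mu * u) - f(z - mu * u)) / (2.0 * mu) * u

def zo_gda(f, x, y, eta=0.05, iters=500):
    # Zeroth-order gradient descent on x and ascent on y.
    for _ in range(iters):
        gx = zo_grad(lambda xv: f(xv, y), x)
        gy = zo_grad(lambda yv: f(x, yv), y)
        x, y = x - eta * gx, y + eta * gy
    return x, y

# Toy usage on a simple quadratic saddle problem (optimum at x = y = 0).
f = lambda x, y: 0.5 * np.sum(x ** 2) + np.sum(x * y) - 0.5 * np.sum(y ** 2)
print(zo_gda(f, np.ones(5), np.ones(5)))
```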
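The knapsack-constrained submodular entry above refers to a simple greedy algorithm. As a generic point of reference only (not the paper's $5.83$-approximation algorithm, and with no comparable guarantee for non-monotone objectives), a basic density-greedy baseline for this problem class looks like the following; the coverage objective, costs, and budget are hypothetical examples.
```python
def density_greedy(items, value, cost, budget):
    # Generic marginal-gain-per-cost greedy for submodular maximization under a
    # knapsack constraint (illustration only; no approximation guarantee claimed).
    selected, spent = [], 0.0
    remaining = set(items)
    while remaining:
        best, best_ratio = None, 0.0
        for e in remaining:
            if cost[e] > budget - spent:
                continue
            gain = value(selected + [e]) - value(selected)  # marginal gain of e
            if gain / cost[e] > best_ratio:
                best, best_ratio = e, gain / cost[e]
        if best is None:
            break
        selected.append(best)
        spent += cost[best]
        remaining.remove(best)
    return selected

# Hypothetical weighted-coverage instance.
weights = {"a": 3.0, "b": 2.0, "c": 1.0}
covers = {1: {"a"}, 2: {"a", "b"}, 3: {"b", "c"}}
value = lambda S: sum(weights[u] for u in set().union(*(covers[i] for i in S))) if S else 0.0
cost = {1: 1.0, 2: 2.0, 3: 1.5}
print(density_greedy([1, 2, 3], value, cost, budget=3.0))
```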