Generative Minimization Networks: Training GANs Without Competition
- URL: http://arxiv.org/abs/2103.12685v1
- Date: Tue, 23 Mar 2021 17:01:08 GMT
- Title: Generative Minimization Networks: Training GANs Without Competition
- Authors: Paulina Grnarova, Yannic Kilcher, Kfir Y. Levy, Aurelien Lucchi,
Thomas Hofmann
- Abstract summary: Recent applications of generative models, particularly GANs, have triggered interest in solving min-max games for which standard optimization techniques are often not suitable.
We provide novel convergence guarantees on this objective and demonstrate why the obtained limit point solves the problem better than known techniques.
- Score: 34.808210988732405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many applications in machine learning can be framed as minimization problems
and solved efficiently using gradient-based techniques. However, recent
applications of generative models, particularly GANs, have triggered interest
in solving min-max games for which standard optimization techniques are often
not suitable. Among known problems experienced by practitioners is the lack of
convergence guarantees or convergence to a non-optimum cycle. At the heart of
these problems is the min-max structure of the GAN objective which creates
non-trivial dependencies between the players. We propose to address this
problem by optimizing a different objective that circumvents the min-max
structure using the notion of duality gap from game theory. We provide novel
convergence guarantees on this objective and demonstrate why the obtained limit
point solves the problem better than known techniques.
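The duality-gap objective described above can be sketched numerically: for a min-max objective f(u, v), the gap DG(u, v) = max_{v'} f(u, v') - min_{u'} f(u', v) is non-negative and equals zero exactly at a saddle point, so it can serve as an ordinary minimization target. Below is a minimal illustrative sketch on a toy convex-concave game, with each inner problem approximated by a few gradient steps; the toy objective and all names are assumptions for illustration, not the paper's implementation.

```python
def f(u, v):
    # Toy convex-concave game: min_u max_v f(u, v); saddle point at (0, 0).
    return u**2 - v**2

def duality_gap(u, v, steps=200, lr=0.05):
    """Estimate DG(u, v) = max_{v'} f(u, v') - min_{u'} f(u', v)
    by approximating each inner problem with gradient steps."""
    v_best = v
    for _ in range(steps):                # gradient ascent in v' (u frozen)
        v_best += lr * (-2.0 * v_best)    # df/dv = -2v
    u_best = u
    for _ in range(steps):                # gradient descent in u' (v frozen)
        u_best -= lr * (2.0 * u_best)     # df/du = 2u
    return f(u, v_best) - f(u_best, v)
```

For this toy game the gap works out to u^2 + v^2: it vanishes at the equilibrium (0, 0) and is positive elsewhere, so minimizing this single scalar over (u, v) drives the pair toward the saddle point without alternating competitive updates.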
Related papers
- WANCO: Weak Adversarial Networks for Constrained Optimization problems [5.257895611010853]
We first transform constrained optimization problems into minimax problems using the augmented Lagrangian method.
We then use two (or several) deep neural networks to represent the primal and dual variables respectively.
The parameters in the neural networks are then trained by an adversarial process.
arXiv Detail & Related papers (2024-07-04T05:37:48Z)
- Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
The need to curtail the behavior of machine learning systems has become increasingly apparent.
This is evidenced by recent advances toward developing models that satisfy robustness and safety requirements.
Our results show that rich parametrizations effectively mitigate the effects of non-convexity in constrained learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z)
- A Constrained Optimization Approach to Bilevel Optimization with Multiple Inner Minima [49.320758794766185]
We propose a new approach that converts the bilevel problem into an equivalent constrained optimization problem, which can then be solved with a primal-dual algorithm.
Such an approach enjoys several advantages: it (a) addresses the multiple-inner-minima challenge and (b) is fully first-order, requiring no Jacobian computations.
arXiv Detail & Related papers (2022-03-01T18:20:01Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- A Decentralized Adaptive Momentum Method for Solving a Class of Min-Max Optimization Problems [9.653157073271021]
We develop a decentralized adaptive momentum (ADAM)-type algorithm for solving a class of min-max optimization problems.
We obtain non-asymptotic rates of convergence of the proposed algorithm for finding a (stochastic) first-order Nash equilibrium point.
arXiv Detail & Related papers (2021-06-10T22:32:01Z)
- Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization [98.0595480384208]
We propose a generalization of the extragradient method which converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $p$-normed vector spaces.
arXiv Detail & Related papers (2020-10-31T21:35:42Z)
- The Landscape of the Proximal Point Method for Nonconvex-Nonconcave Minimax Optimization [10.112779201155005]
Minimax optimization has become a central tool in machine learning, with applications in robust optimization, reinforcement learning, GANs, etc.
These applications are often nonconvex-nonconcave, but existing theory is unable to identify the fundamental difficulties this poses.
arXiv Detail & Related papers (2020-06-15T18:17:00Z)
- Non-convex Min-Max Optimization: Applications, Challenges, and Recent Theoretical Advances [58.54078318403909]
The min-max problem, also known as the saddle point problem, is a class of adversarial optimization problems that is also studied in the context of zero-sum games.
arXiv Detail & Related papers (2020-06-15T05:33:42Z)
- Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to their applications in deep learning (e.g., deep AUC maximization).
arXiv Detail & Related papers (2020-06-12T00:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.