Adaptive Consensus Optimization Method for GANs
- URL: http://arxiv.org/abs/2304.10317v1
- Date: Thu, 20 Apr 2023 13:50:42 GMT
- Title: Adaptive Consensus Optimization Method for GANs
- Authors: Sachin Kumar Danisetty, Santhosh Reddy Mylaram, Pawan Kumar
- Abstract summary: We propose a second-order gradient-based method with ADAM and RMSprop for the training of generative adversarial networks.
We derive the fixed-point iteration corresponding to the proposed method and show that it is convergent.
The proposed method produces better or comparable inception scores, and image quality comparable to that of other recently proposed state-of-the-art second-order methods.
- Score: 2.1227526213206542
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a second-order gradient-based method with ADAM and RMSprop for the
training of generative adversarial networks. The proposed method is the fastest to
reach similar accuracy when compared to prominent second-order methods. Unlike
recent state-of-the-art methods, it requires neither solving a linear system
nor additional mixed second-derivative terms. We derive the fixed-point
iteration corresponding to the proposed method and show that it is convergent.
The proposed method produces better or comparable inception scores, and image
quality comparable to that of other recently proposed state-of-the-art
second-order methods. Compared to first-order methods such as ADAM, it produces
significantly better inception scores. The proposed method is compared and
validated on popular datasets such as FFHQ, LSUN, CIFAR10, MNIST, and Fashion
MNIST for image generation tasks (accepted at IJCNN 2023). Codes:
https://github.com/misterpawan/acom
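The repository above contains the authors' actual implementation; as a minimal sketch of the general recipe the abstract describes (a consensus-style gradient penalty on the game gradient, fed through Adam), the following PyTorch helper may be useful. The function name, its arguments, and the regularization weight `gamma` are illustrative assumptions, not the authors' API.

```python
# Minimal sketch (not the authors' ACOM code): Adam applied to the GAN game
# gradient augmented with the gradient of the consensus regularizer
# 0.5 * ||v||^2, where v stacks both players' gradients. `gamma` and the
# function signature are illustrative assumptions; see the repo above for
# the actual method.
import torch

def consensus_adam_step(gen, disc, g_loss, d_loss, opt_g, opt_d, gamma=1.0):
    # g_loss and d_loss must come from the same forward pass with grad enabled.
    g_params = [p for p in gen.parameters() if p.requires_grad]
    d_params = [p for p in disc.parameters() if p.requires_grad]
    # Game gradient v, kept differentiable so we can differentiate through it.
    g_grads = torch.autograd.grad(g_loss, g_params, create_graph=True)
    d_grads = torch.autograd.grad(d_loss, d_params, create_graph=True)
    # Consensus regularizer: half the squared norm of the stacked gradients.
    reg = 0.5 * sum((g * g).sum() for g in g_grads + d_grads)
    # Second-order term: gradient of the regularizer (a Jacobian-vector product).
    reg_grads = torch.autograd.grad(reg, g_params + d_params)
    # Hand Adam the augmented gradient v + gamma * grad(0.5 * ||v||^2).
    for p, v, jv in zip(g_params + d_params, g_grads + d_grads, reg_grads):
        p.grad = (v + gamma * jv).detach()
    opt_g.step()
    opt_d.step()
    opt_g.zero_grad(set_to_none=True)
    opt_d.zero_grad(set_to_none=True)
```

Swapping `torch.optim.Adam` for `torch.optim.RMSprop` when constructing `opt_g` and `opt_d` gives the RMSprop variant. Note that no linear system is solved: the second-order information enters only through the extra backward pass over the gradient norm.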
Related papers
- Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z) - Curriculum Direct Preference Optimization for Diffusion and Consistency Models [110.08057135882356]
We propose a novel and enhanced version of DPO based on curriculum learning for text-to-image generation.
Our approach, Curriculum DPO, is compared against state-of-the-art fine-tuning approaches on three benchmarks.
arXiv Detail & Related papers (2024-05-22T13:36:48Z) - A Gauss-Newton Approach for Min-Max Optimization in Generative Adversarial Networks [7.217857709620766]
A novel first-order method is proposed for training generative adversarial networks (GANs).
It modifies the Gauss-Newton method to approximate the min-max Hessian and uses the Sherman-Morrison inversion formula to calculate the inverse (the identity is restated after this list).
Our method is capable of generating high-fidelity images with greater diversity across multiple datasets.
arXiv Detail & Related papers (2024-04-10T17:08:46Z) - On the convergence of adaptive first order methods: proximal gradient and alternating minimization algorithms [4.307128674848627]
AdaPG$^{q,r}$ is a framework that unifies and extends existing results by providing larger stepsize policies and improved lower bounds.
Different choices of the parameters $q$ and $r$ are discussed and the efficacy of the resulting methods is demonstrated through numerical simulations.
arXiv Detail & Related papers (2023-11-30T10:29:43Z) - Plug-and-Play split Gibbs sampler: embedding deep generative priors in
Bayesian inference [12.91637880428221]
This paper introduces a plug-and-play sampling algorithm that leverages variable splitting to efficiently sample from a posterior distribution.
It divides the challenging task of posterior sampling into two simpler sampling problems.
Its performance is compared to recent state-of-the-art optimization and sampling methods.
arXiv Detail & Related papers (2023-04-21T17:17:51Z) - FedDA: Faster Framework of Local Adaptive Gradient Methods via Restarted
Dual Averaging [104.41634756395545]
Federated learning (FL) is an emerging learning paradigm to tackle massively distributed data.
We propose FedDA, a novel framework for local adaptive gradient methods.
We show that FedDA-MVR is the first adaptive FL algorithm that achieves this rate.
arXiv Detail & Related papers (2023-02-13T05:10:30Z) - Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z) - Efficient ADMM-based Algorithms for Convolutional Sparse Coding [38.31173467674558]
This letter presents a solution to a convolutional least-squares fitting subproblem.
We also use the same approach for developing an efficient convolutional dictionary learning method.
We propose a novel algorithm for convolutional sparse coding with a constraint on the approximation error.
arXiv Detail & Related papers (2021-09-07T09:49:10Z) - Methods of ranking for aggregated fuzzy numbers from interval-valued
data [0.0]
This paper primarily presents two methods of ranking aggregated fuzzy numbers from intervals using the Interval Agreement Approach (IAA).
The shortcomings of previous measures, along with the improvements of the proposed methods, are illustrated using both a synthetic and real-world application.
arXiv Detail & Related papers (2020-12-03T02:56:15Z) - CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z) - Holistically-Attracted Wireframe Parsing [123.58263152571952]
This paper presents a fast and parsimonious parsing method to detect a vectorized wireframe in an input image with a single forward pass.
The proposed method is end-to-end trainable, consisting of three components: (i) line segment and junction proposal generation, (ii) line segment and junction matching, and (iii) line segment and junction verification.
arXiv Detail & Related papers (2020-03-03T17:43:57Z)
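For reference, the Sherman-Morrison identity mentioned in the Gauss-Newton entry above (stated generically here, not in that paper's notation) inverts a rank-one update of an invertible matrix $A$ without a fresh linear solve:

$$(A + uv^{\top})^{-1} = A^{-1} - \frac{A^{-1} u \, v^{\top} A^{-1}}{1 + v^{\top} A^{-1} u}, \qquad \text{valid when } 1 + v^{\top} A^{-1} u \neq 0.$$

Because only matrix-vector products with $A^{-1}$ appear on the right-hand side, such approximations avoid solving a full linear system at every step.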
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.