Dynamics of Fourier Modes in Torus Generative Adversarial Networks
- URL: http://arxiv.org/abs/2209.01842v1
- Date: Mon, 5 Sep 2022 09:03:22 GMT
- Title: Dynamics of Fourier Modes in Torus Generative Adversarial Networks
- Authors: \'Angel Gonz\'alez-Prieto, Alberto Mozo, Edgar Talavera and Sandra
G\'omez-Canaval
- Abstract summary: Generative Adversarial Networks (GANs) are powerful Machine Learning models capable of generating fully synthetic samples of a desired phenomenon with a high resolution.
Despite their success, the training process of a GAN is highly unstable, and typically several auxiliary heuristics must be applied to the networks to reach an acceptable convergence of the model.
We introduce a novel method to analyze the convergence and stability in the training of Generative Adversarial Networks.
- Score: 0.8189696720657245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) are powerful Machine Learning models
capable of generating fully synthetic samples of a desired phenomenon with a
high resolution. Despite their success, the training process of a GAN is highly
unstable, and typically several auxiliary heuristics must be applied to the
networks to reach an acceptable convergence of the model. In
this paper, we introduce a novel method to analyze the convergence and
stability in the training of Generative Adversarial Networks. For this purpose,
we propose to decompose the objective function of the adversarial min-max game
defining a periodic GAN into its Fourier series. By studying the dynamics of
the truncated Fourier series for the continuous Alternating Gradient Descent
algorithm, we are able to approximate the real flow and to identify the main
features of the convergence of the GAN. This approach is confirmed empirically
by studying the training flow in a $2$-parametric GAN aiming to generate an
unknown exponential distribution. As a byproduct, we show that convergent orbits
in GANs are small perturbations of periodic orbits, so the Nash equilibria are
spiral attractors. This theoretically justifies the slow and unstable training
observed in GANs.
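As a toy illustration (a minimal sketch, not the authors' code): take the lowest Fourier mode V(theta, phi) = sin(theta) sin(phi) as a periodic objective on the torus and integrate the continuous Alternating Gradient Descent flow. The `eps` consensus-style damping term is our hypothetical stand-in for the contraction that the paper derives from the Fourier-mode dynamics.

```python
# Minimal sketch, not the authors' code: the lowest Fourier mode of a periodic
# objective on the torus, V(theta, phi) = sin(theta)*sin(phi), integrated under
# the continuous Alternating Gradient Descent flow. The `eps` consensus-style
# damping is a hypothetical stand-in for the contraction the paper derives
# from the Fourier-mode dynamics.
import numpy as np

def flow(theta, phi, eps):
    v_t = np.cos(theta) * np.sin(phi)   # dV/dtheta
    v_p = np.sin(theta) * np.cos(phi)   # dV/dphi
    # Gradient of H = 0.5*(v_t**2 + v_p**2), used only for the damping term
    h_t = -v_t * np.sin(theta) * np.sin(phi) + v_p * np.cos(theta) * np.cos(phi)
    h_p = v_t * np.cos(theta) * np.cos(phi) - v_p * np.sin(theta) * np.sin(phi)
    # Generator descends V, discriminator ascends it; -eps*grad(H) damps both
    return -v_t - eps * h_t, v_p - eps * h_p

theta, phi, dt = 0.8, 0.6, 1e-3
for _ in range(100_000):
    d_t, d_p = flow(theta, phi, eps=0.05)
    theta, phi = theta + dt * d_t, phi + dt * d_p
print(f"state after integration: ({theta:.4f}, {phi:.4f})")
```

With eps = 0 the orbit circles the Nash equilibrium at the origin indefinitely; any eps > 0 turns it into a spiral sinking into the equilibrium, which is the qualitative picture behind the slow, oscillatory convergence described above.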
Related papers
- Parallelly Tempered Generative Adversarial Networks [7.94957965474334]
A generative adversarial network (GAN) has been a representative backbone model in generative artificial intelligence (AI).
This work analyzes the training instability and inefficiency in the presence of mode collapse by linking it to multimodality in the target distribution.
With our newly developed GAN objective function, the generator can learn all the tempered distributions simultaneously; the sketch after this entry illustrates the tempering idea.
arXiv Detail & Related papers (2024-11-18T18:01:13Z)
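To make the tempering idea concrete (a minimal sketch of the general notion of tempering, not this paper's objective; the bimodal target and the temperatures are illustrative choices):

```python
# Minimal sketch of tempering (general notion, not this paper's objective):
# raising a bimodal density to the power 1/tau flattens its modes, so the
# valley between modes fills in and becomes easier for a generator to cover.
import numpy as np

def target(x):
    # Unnormalized mixture of two unit-width Gaussians at +/-2
    return 0.5 * np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

xs = np.linspace(-6.0, 6.0, 1201)
dx = xs[1] - xs[0]
for tau in (1.0, 2.0, 4.0):
    p = target(xs) ** (1.0 / tau)
    p /= p.sum() * dx                        # renormalize numerically
    ratio = p[len(xs) // 2] / p.max()        # density in the valley vs at a mode
    print(f"tau={tau}: valley/peak ratio = {ratio:.3f}")  # rises toward 1 with tau
```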
- On the Convergence of (Stochastic) Gradient Descent for Kolmogorov--Arnold Networks [56.78271181959529]
Kolmogorov--Arnold Networks (KANs) have gained significant attention in the deep learning community.
Empirical investigations demonstrate that KANs optimized via stochastic gradient descent (SGD) are capable of achieving near-zero training loss; a one-edge toy illustrating this behaviour follows the entry.
arXiv Detail & Related papers (2024-10-10T15:34:10Z)
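A one-edge toy in the spirit of KANs (an illustrative setup, not the paper's): a single learnable univariate function, parameterized as a linear combination of fixed Gaussian bumps, is fit to sin(x) by full-batch gradient descent and reaches a near-zero training loss.

```python
# One-edge toy in the spirit of KANs (illustrative, not the paper's setup):
# one learnable univariate function, a linear combination of fixed Gaussian
# bumps, is fit to sin(x) with full-batch gradient descent on squared error.
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(-np.pi, np.pi, 128)
ys = np.sin(xs)

centers = np.linspace(-np.pi, np.pi, 16)
basis = np.exp(-0.5 * ((xs[:, None] - centers[None, :]) / 0.5) ** 2)  # (128, 16)
coef = rng.normal(scale=0.1, size=16)        # learnable coefficients

lr = 0.5
for _ in range(5000):
    err = basis @ coef - ys                  # residual of the current fit
    coef -= lr * (basis.T @ err) / len(xs)   # gradient of the mean squared error
print(f"final training MSE: {np.mean((basis @ coef - ys) ** 2):.2e}")
```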
- FAN: Fourier Analysis Networks [47.08787684221114]
We propose FAN, an architecture designed to efficiently model and reason about periodic phenomena.
We demonstrate the effectiveness of FAN in modeling and reasoning about periodic functions, and its superiority and generalizability across a range of real-world tasks; a layer sketch in this spirit follows the entry.
arXiv Detail & Related papers (2024-10-03T17:02:21Z)
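A layer sketch in this spirit (illustrative only; the names, shapes, and the split between a periodic and a linear branch are our assumptions, not FAN's exact architecture):

```python
# Illustrative Fourier-style layer (not FAN's exact architecture): part of the
# output passes through sin/cos of learned projections, so periodic structure
# is representable with few parameters; the rest is an ordinary affine branch.
import numpy as np

rng = np.random.default_rng(0)

def fourier_layer(x, w_per, w_lin, b_lin):
    z = x @ w_per                                 # projections for the periodic branch
    periodic = np.concatenate([np.sin(z), np.cos(z)], axis=-1)
    linear = x @ w_lin + b_lin                    # standard affine branch
    return np.concatenate([periodic, linear], axis=-1)

x = rng.normal(size=(4, 3))                       # batch of 4 inputs of dim 3
w_per = rng.normal(size=(3, 5))
w_lin, b_lin = rng.normal(size=(3, 6)), np.zeros(6)
out = fourier_layer(x, w_per, w_lin, b_lin)
print(out.shape)                                  # (4, 16): 5 sin + 5 cos + 6 linear
```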
- Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}\big(\ln(T)/T^{1-\frac{1}{\alpha}}\big)$; a reference sketch of the plain AdaGrad update follows the entry.
arXiv Detail & Related papers (2024-03-11T09:10:37Z)
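For reference, a minimal sketch of the plain, non-federated AdaGrad update that the entry builds on (the over-the-air aggregation is not shown, and the quadratic objective is an illustrative choice):

```python
# Minimal reference sketch of the plain (non-federated) AdaGrad update; the
# over-the-air/federated machinery of the paper is not shown here.
import numpy as np

def grad(x):
    return 2.0 * x                     # gradient of the toy objective f(x) = ||x||^2

x = np.array([5.0, -3.0])
accum = np.zeros_like(x)               # running sum of squared gradients
lr, eps = 1.0, 1e-8
for t in range(1000):
    g = grad(x)
    accum += g * g
    x -= lr * g / (np.sqrt(accum) + eps)   # per-coordinate adaptive step
print(f"final iterate: {x}")           # approaches the stationary point at 0
```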
- Accurate generation of stochastic dynamics based on multi-model Generative Adversarial Networks [0.0]
Generative Adversarial Networks (GANs) have shown immense potential in fields such as text and image generation.
Here we quantitatively test this approach by applying it to a prototypical process on a lattice.
Importantly, the discreteness of the model is retained despite the noise.
arXiv Detail & Related papers (2023-05-25T10:41:02Z)
- Effective Dynamics of Generative Adversarial Networks [16.51305515824504]
Generative adversarial networks (GANs) are a class of machine-learning models that use adversarial training to generate new samples.
One major form of training failure, known as mode collapse, involves the generator failing to reproduce the full diversity of modes in the target probability distribution.
We present an effective model of GAN training, which captures the learning dynamics by replacing the generator neural network with a collection of particles in the output space; a toy version of this particle picture follows the entry.
arXiv Detail & Related papers (2022-12-08T22:04:01Z)
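A toy version of the particle picture (our illustrative dynamics, not the paper's exact equations): "generator" particles in output space climb a discriminator-like score, here the log-ratio of Gaussian kernel density estimates of the data and of the particles themselves.

```python
# Illustrative particle picture (our toy dynamics, not the paper's equations):
# particles replacing the generator ascend a discriminator-like score,
# D(x) = log p_data(x) - log p_gen(x), built from kernel density estimates.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=0.5, size=200)       # samples from the target
particles = rng.normal(loc=0.0, scale=0.5, size=50)   # the particle "generator"

def kde(points, x, h=1.0):
    # Gaussian kernel density estimate of `points`, evaluated at `x`
    return np.mean(np.exp(-0.5 * ((x[:, None] - points[None, :]) / h) ** 2), axis=1)

def score(x, gen):
    return np.log(kde(data, x) + 1e-12) - np.log(kde(gen, x) + 1e-12)

lr, fd = 0.05, 1e-3
for _ in range(500):
    frozen = particles.copy()                          # freeze p_gen within the step
    grad = (score(particles + fd, frozen) - score(particles - fd, frozen)) / (2 * fd)
    particles += lr * grad                             # particles ascend the score
print(f"particle mean: {particles.mean():.3f} (target mean 3.0)")
```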
- Momentum Diminishes the Effect of Spectral Bias in Physics-Informed Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
arXiv Detail & Related papers (2022-06-29T19:03:10Z)
- Forward Super-Resolution: How Can GANs Learn Hierarchical Generative Models for Real-World Distributions [66.05472746340142]
Generative adversarial networks (GANs) are among the most successful models for learning high-complexity, real-world distributions.
In this paper we show how GANs can efficiently learn the distribution of real-life images.
arXiv Detail & Related papers (2021-06-04T17:33:29Z)
- Convergence dynamics of Generative Adversarial Networks: the dual metric flows [0.0]
We investigate convergence in the Generative Adversarial Networks used in machine learning.
We study the limit of small learning rate and show that, as in single-network training, the GAN learning dynamics tend to some limiting dynamics.
arXiv Detail & Related papers (2020-12-18T18:00:12Z)
- Training Generative Adversarial Networks by Solving Ordinary Differential Equations [54.23691425062034]
We study the continuous-time dynamics induced by GAN training.
From this perspective, we hypothesise that instabilities in training GANs arise from the integration error.
We experimentally verify that well-known ODE solvers (such as Runge-Kutta) can stabilise training; a toy comparison follows the entry.
arXiv Detail & Related papers (2020-10-28T15:23:49Z)
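A toy comparison (our example, not the paper's experiments): for the bilinear game V(theta, phi) = theta * phi, the exact continuous-time dynamics circle the equilibrium at constant radius. Explicit Euler accumulates integration error and spirals outward, while classical fourth-order Runge-Kutta stays on the orbit, illustrating why better ODE solvers can stabilise training.

```python
# Toy comparison (illustrative, not the paper's experiments): integrate the
# continuous GAN dynamics of the bilinear game V(theta, phi) = theta*phi.
# The exact flow preserves the radius; Euler's integration error makes it
# grow, while classical Runge-Kutta 4 keeps the orbit essentially intact.
import numpy as np

def field(state):
    theta, phi = state
    return np.array([-phi, theta])   # generator descends V, discriminator ascends

def euler_step(state, h):
    return state + h * field(state)

def rk4_step(state, h):
    k1 = field(state)
    k2 = field(state + 0.5 * h * k1)
    k3 = field(state + 0.5 * h * k2)
    k4 = field(state + h * k3)
    return state + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

h, steps = 0.1, 1000
e = np.array([1.0, 0.0])
r = np.array([1.0, 0.0])
for _ in range(steps):
    e = euler_step(e, h)
    r = rk4_step(r, h)
print(f"radius: euler={np.linalg.norm(e):.1f}, rk4={np.linalg.norm(r):.6f}")
# Euler's radius blows up to roughly 145, while RK4 stays near 1.
```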