A game-theoretic approach for Generative Adversarial Networks
- URL: http://arxiv.org/abs/2003.13637v2
- Date: Mon, 14 Sep 2020 16:27:50 GMT
- Title: A game-theoretic approach for Generative Adversarial Networks
- Authors: Barbara Franci and Sergio Grammatico
- Abstract summary: Generative adversarial networks (GANs) are a class of generative models, known for producing accurate samples.
The main bottleneck for their implementation is that the neural networks are very hard to train.
We propose a stochastic relaxed forward-backward (SRFB) algorithm for GANs.
We prove that when the pseudogradient mapping of the game is monotone, the iterates converge to an exact solution or to a neighbourhood of it.
- Score: 2.995087247817663
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) are a class of generative models,
known for producing accurate samples. The key feature of GANs is that there are
two antagonistic neural networks: the generator and the discriminator. The main
bottleneck for their implementation is that the neural networks are very hard
to train. One way to improve their performance is to design reliable algorithms
for the adversarial process. Since the training can be cast as a stochastic
Nash equilibrium problem, we rewrite it as a variational inequality and
introduce an algorithm to compute an approximate solution. Specifically, we
propose a stochastic relaxed forward-backward algorithm for GANs. We prove that
when the pseudogradient mapping of the game is monotone, the iterates converge
to an exact solution or to a neighbourhood of it.
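As a concrete illustration, below is a minimal sketch (not the authors' implementation) of a stochastic relaxed forward-backward update on a toy two-player monotone game, with a regularized bilinear saddle point standing in for the generator-discriminator game. The step size `lam`, relaxation parameter `delta`, regularization `mu`, and noise level are illustrative assumptions.

```python
# SRFB-style iteration on the toy monotone game
#   min_x max_y  x^T A y + (mu/2)||x||^2 - (mu/2)||y||^2,
# whose pseudogradient F(x, y) = (A y + mu x, -A^T x + mu y) is monotone,
# the setting under which the paper proves convergence.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
mu = 0.1  # small regularization keeping the toy pseudogradient strongly monotone

def pseudogradient(x, y, noise=0.05):
    """Noisy sample of F(x, y); the noise stands in for minibatch sampling."""
    gx = A @ y + mu * x + noise * rng.standard_normal(n)
    gy = -A.T @ x + mu * y + noise * rng.standard_normal(n)
    return gx, gy

x, y = rng.standard_normal(n), rng.standard_normal(n)
x_bar, y_bar = x.copy(), y.copy()
lam, delta = 0.05, 0.5  # step size and relaxation parameter (illustrative)

for _ in range(5000):
    # Relaxation step: blend the running average with the current iterate.
    x_bar = (1 - delta) * x_bar + delta * x
    y_bar = (1 - delta) * y_bar + delta * y
    # Forward step taken from the averaged point; with no constraints, the
    # backward (projection) step is the identity.
    gx, gy = pseudogradient(x, y)
    x, y = x_bar - lam * gx, y_bar - lam * gy

# The unique equilibrium of this toy game is (0, 0), so both norms should
# settle near the stochastic-noise floor.
print(np.linalg.norm(x), np.linalg.norm(y))
```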
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer, based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions; a minimal sketch of the basic Sinkhorn loop appears after this list.
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- Generative Adversarial Learning of Sinkhorn Algorithm Initializations [0.0]
We show that meticulously training a neural network to learn initializations to the algorithm via the entropic OT dual problem can significantly speed up convergence.
We show that our network can even be used as a standalone OT solver to approximate regularized transport distances to a few percent error.
arXiv Detail & Related papers (2022-11-30T21:56:09Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
- On the Reproducibility of Neural Network Predictions [52.47827424679645]
We study the problem of churn, identify factors that cause it, and propose two simple means of mitigating it.
We first demonstrate that churn is indeed an issue, even for standard image classification tasks.
We propose using minimum-entropy regularizers to increase prediction confidence; a sketch of such a regularizer appears after this list.
We present empirical results showing the effectiveness of both techniques in reducing churn while improving the accuracy of the underlying model.
arXiv Detail & Related papers (2021-02-05T18:51:01Z)
- Training Generative Adversarial Networks via stochastic Nash games [2.995087247817663]
Generative adversarial networks (GANs) are a class of generative models with two antagonistic neural networks: a generator and a discriminator.
We show convergence to an exact solution when an increasing number of samples is available.
We also show convergence of an averaged variant of the SRFB algorithm to a neighbourhood of the solution when only a few samples are available.
arXiv Detail & Related papers (2020-10-17T09:07:40Z)
- Rotation Averaging with Attention Graph Neural Networks [4.408728798697341]
We propose a real-time and robust solution to large-scale multiple rotation averaging.
Our method uses all observations, suppressing the effects of outliers through weighted averaging and an attention mechanism within the network design.
The result is a network that is faster, more robust, and can be trained with fewer samples than the previous neural approach.
arXiv Detail & Related papers (2020-10-14T02:07:19Z)
- How Powerful are Shallow Neural Networks with Bandlimited Random Weights? [25.102870584507244]
We investigate the expressive power of depth-2 neural networks with bandlimited random weights.
A random net is a neural network whose hidden-layer parameters are frozen at random bandlimited values.
arXiv Detail & Related papers (2020-08-19T13:26:12Z)
- Bandit Samplers for Training Graph Neural Networks [63.17765191700203]
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs).
These sampling algorithms are not applicable to more general graph neural networks (GNNs), such as Graph Attention Networks (GAT), where the message aggregator contains learned rather than fixed weights.
arXiv Detail & Related papers (2020-06-10T12:48:37Z)
- Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN [13.561553183983774]
We propose a novel technique to make neural network robust to adversarial examples using a generative adversarial network.
The generator network generates an adversarial perturbation that can easily fool the classifier network, using the gradient of the loss with respect to each image; a toy sketch of this gradient signal appears after this list.
Our adversarial training framework efficiently reduces overfitting and outperforms other regularization methods such as Dropout.
arXiv Detail & Related papers (2017-05-09T15:30:58Z)
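Both Sinkhorn-based entries above build on the same primitive, so here is a minimal, self-contained sketch of the classic Sinkhorn loop: alternately rescale the rows and columns of a positive matrix until it matches target marginals. The matrix, uniform marginals, and iteration count are illustrative; LinSATNet extends this loop to jointly encode multiple marginal sets, and the second paper learns initializations for it.

```python
# Classic Sinkhorn normalization: scale a positive matrix K so that its
# rows sum to r and its columns sum to c.
import numpy as np

def sinkhorn(K, r, c, n_iters=200):
    u = np.ones_like(r)
    for _ in range(n_iters):
        v = c / (K.T @ u)  # rescale to fit the column marginals
        u = r / (K @ v)    # rescale to fit the row marginals
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(1)
K = np.exp(rng.standard_normal((4, 4)))  # positive kernel matrix
r = np.full(4, 0.25)                     # target row marginals (uniform)
c = np.full(4, 0.25)                     # target column marginals (uniform)
P = sinkhorn(K, r, c)
print(P.sum(axis=1), P.sum(axis=0))      # both ~ [0.25 0.25 0.25 0.25]
```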
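For the reproducibility entry, here is a hedged sketch of one way a minimum-entropy regularizer can be implemented: a penalty on the Shannon entropy of the softmax output, added to the task loss so the model is pushed toward more confident predictions. The weight `beta` is an assumption for illustration, not the paper's setting.

```python
# Minimum-entropy regularizer: beta * mean Shannon entropy of the softmax.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_penalty(logits, beta=0.1):
    p = softmax(logits)
    ent = -(p * np.log(p + 1e-12)).sum(axis=-1)  # per-example entropy
    return beta * ent.mean()

logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.0, -0.1]])
print(entropy_penalty(logits))  # would be added to the training loss
```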
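Finally, for the Generative Adversarial Trainer entry, a toy sketch of the signal the method builds on: the gradient of the classifier's loss with respect to the input image. A linear classifier stands in for the classifier network, and a simple sign-and-scale map stands in for the generator; the paper instead trains a neural generator that consumes this gradient.

```python
# Gradient-based perturbation of an input, sketched on a toy linear
# classifier with logistic loss (the "generator" here is just eps * sign(g)).
import numpy as np

rng = np.random.default_rng(2)
w = rng.standard_normal(8)  # toy linear classifier weights
x = rng.standard_normal(8)  # one "image" as a flat vector
y = 1.0                     # true label in {-1, +1}

def loss_grad_wrt_input(x, y, w):
    """Gradient of the logistic loss -log(sigmoid(y * w @ x)) w.r.t. x."""
    s = 1.0 / (1.0 + np.exp(-y * (w @ x)))
    return -(1.0 - s) * y * w

g = loss_grad_wrt_input(x, y, w)
eps = 0.1                        # perturbation budget (illustrative)
x_adv = x + eps * np.sign(g)     # step in the loss-increasing direction
print(y * (w @ x), y * (w @ x_adv))  # the margin drops for the perturbed input
```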
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.