Connecting GANs, MFGs, and OT
- URL: http://arxiv.org/abs/2002.04112v4
- Date: Sat, 4 Sep 2021 14:59:02 GMT
- Title: Connecting GANs, MFGs, and OT
- Authors: Haoyang Cao, Xin Guo, Mathieu Laurière
- Abstract summary: Generative adversarial networks (GANs) have enjoyed tremendous success in image generation and processing.
This paper analyzes GANs from the perspectives of mean-field games (MFGs) and optimal transport.
- Score: 4.530876736231948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) have enjoyed tremendous success in
image generation and processing, and have recently attracted growing interest
in financial modeling. This paper analyzes GANs from the perspectives of
mean-field games (MFGs) and optimal transport. More specifically, from the game
theoretical perspective, GANs are interpreted as MFGs under a Pareto optimality
criterion or as mean-field controls; from the optimal transport perspective,
GANs minimize the optimal transport cost, indexed by the generator, from the
known latent distribution to the unknown true distribution of the data. The MFG
perspective of GANs leads to a GAN-based computational method (MFGANs) to solve
MFGs: one neural network for the backward Hamilton-Jacobi-Bellman equation and
one neural network for the forward Fokker-Planck equation, with the two neural
networks trained in an adversarial way. Numerical experiments demonstrate
superior performance of the proposed algorithm, especially in the
higher-dimensional case, when compared with existing neural network approaches.
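The optimal transport view above can be illustrated with a minimal, self-contained sketch (not the paper's algorithm): in one dimension, the optimal transport plan between two empirical samples matches their order statistics, so the 1-Wasserstein cost is easy to compute, and a toy shift generator `G_theta(z) = z + theta` (a hypothetical stand-in for a trained generator) lowers that cost as it approaches the data distribution.

```python
import numpy as np

def wasserstein1_1d(x, y):
    """Empirical 1-Wasserstein distance between two same-size 1-D samples.

    In one dimension the optimal transport plan sorts both samples and
    matches them in order, so the cost is the mean absolute difference
    of the order statistics.
    """
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape
    return float(np.mean(np.abs(x - y)))

rng = np.random.default_rng(0)
latent = rng.normal(0.0, 1.0, size=1000)   # known latent distribution
data = rng.normal(3.0, 1.0, size=1000)     # "unknown" true data distribution

# Toy generator G_theta(z) = z + theta; training would tune theta to
# minimize the transport cost from G(latent) to the data.
for theta in (0.0, 1.5, 3.0):
    print(theta, wasserstein1_1d(latent + theta, data))
```

The printed costs shrink as `theta` approaches the true mean shift, which is the quantity the generator is implicitly driving down under this interpretation.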
Related papers
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- On the Convergence of (Stochastic) Gradient Descent for Kolmogorov--Arnold Networks [56.78271181959529]
Kolmogorov--Arnold Networks (KANs) have gained significant attention in the deep learning community.
Empirical investigations demonstrate that KANs optimized via stochastic gradient descent (SGD) are capable of achieving near-zero training loss.
arXiv Detail & Related papers (2024-10-10T15:34:10Z)
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss with the increase in the number of learning epochs.
We show that the threshold on the number of training samples increases with the increase in the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- Bridging Mean-Field Games and Normalizing Flows with Trajectory Regularization [11.517089115158225]
Mean-field games (MFGs) are a modeling framework for systems with a large number of interacting agents.
Normalizing flows (NFs) are a family of deep generative models that compute data likelihoods by using an invertible mapping.
In this work, we unravel the connections between MFGs and NFs by contextualizing the training of an NF as solving the MFG.
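The likelihood computation that NFs perform can be sketched with the simplest possible flow (an assumed toy example, not this paper's model): a single affine map `x = a*z + b` over a standard-normal base, where the change-of-variables formula gives the exact data log-density.

```python
import numpy as np

def affine_flow_logpdf(x, a, b):
    """Exact log-likelihood under a one-step affine normalizing flow.

    The flow is x = a*z + b with z ~ N(0, 1). Change of variables:
        log p_X(x) = log p_Z(f^{-1}(x)) + log |d f^{-1}/dx|
    """
    z = (x - b) / a                                  # invertible mapping f^{-1}
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))     # N(0, 1) log-density
    log_det = -np.log(abs(a))                        # log |df^{-1}/dx| = -log|a|
    return log_base + log_det

# With a=2, b=1 this flow is exactly the N(1, 4) distribution.
x = np.array([0.0, 1.0, 3.0])
print(affine_flow_logpdf(x, a=2.0, b=1.0))
```

Deeper flows stack many such invertible maps and sum the log-determinant terms, but the likelihood bookkeeping is the same.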
arXiv Detail & Related papers (2022-06-30T02:44:39Z)
- Training Sparse Neural Network by Constraining Synaptic Weight on Unit Lp Sphere [2.429910016019183]
Constraining the synaptic weights to the unit Lp-sphere enables flexible control of the sparsity via p.
Our approach is validated by experiments on benchmark datasets covering a wide range of domains.
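A hypothetical sketch of the constraint described in this abstract (the paper's actual training procedure may differ): after each gradient step, the weight vector is rescaled back onto the unit Lp-sphere by dividing by its Lp-norm, with smaller p concentrating mass on fewer coordinates during optimization.

```python
import numpy as np

def project_lp_sphere(w, p):
    """Rescale a weight vector onto the unit Lp-sphere (||w||_p = 1)."""
    norm = np.sum(np.abs(w) ** p) ** (1.0 / p)
    return w / norm

w = np.array([3.0, -4.0])
for p in (1, 2):
    u = project_lp_sphere(w, p)
    # After rescaling, the Lp-norm of the constrained weights is 1.
    print(p, u, np.sum(np.abs(u) ** p) ** (1.0 / p))
```

The choice of p is the sparsity knob the abstract refers to: the geometry of the L1 sphere favors solutions with many near-zero coordinates, while L2 does not.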
arXiv Detail & Related papers (2021-03-30T01:02:31Z)
- An Adversarial Human Pose Estimation Network Injected with Graph Structure [75.08618278188209]
In this paper, we design a novel generative adversarial network (GAN) to improve the localization accuracy of visible joints when some joints are invisible.
The network consists of two simple but efficient modules: the Cascade Feature Network (CFN) and the Graph Structure Network (GSN).
arXiv Detail & Related papers (2021-03-29T12:07:08Z)
- Joint User Association and Power Allocation in Heterogeneous Ultra Dense Network via Semi-Supervised Representation Learning [22.725452912879376]
Heterogeneous Ultra-Dense Network (HUDN) can enable higher connectivity density and ultra-high data rates.
This paper proposes a novel idea for resolving the joint user association and power control problem.
We train a Graph Neural Network (GNN) to approach this representation function by using semi-supervised learning.
arXiv Detail & Related papers (2021-03-29T06:39:51Z)
- Resource Allocation via Graph Neural Networks in Free Space Optical Fronthaul Networks [119.81868223344173]
This paper investigates the optimal resource allocation in free space optical (FSO) fronthaul networks.
We consider the graph neural network (GNN) for the policy parameterization to exploit the FSO network structure.
The primal-dual learning algorithm is developed to train the GNN in a model-free manner, where the knowledge of system models is not required.
arXiv Detail & Related papers (2020-06-26T14:20:48Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
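The space savings behind such binary representations can be illustrated with an assumed toy example (not BGN itself): thresholding real-valued node embeddings at zero, packing the resulting bits, and comparing nodes by Hamming distance instead of floating-point arithmetic.

```python
import numpy as np

# Binarize float32 node embeddings with a sign threshold, then pack
# the bits: 64 float32 dims (256 bytes) per node become 8 bytes.
def binarize(emb):
    return (emb > 0).astype(np.uint8)

rng = np.random.default_rng(1)
emb = rng.normal(size=(4, 64)).astype(np.float32)   # 4 nodes, 64-dim
bits = binarize(emb)
packed = np.packbits(bits, axis=1)                  # shape (4, 8), uint8

print(emb.nbytes, packed.nbytes)                    # 32x smaller

def hamming(a, b):
    """Bit-level distance between two packed binary embeddings."""
    return int(np.unpackbits(a ^ b).sum())

print(hamming(packed[0], packed[1]))
```

Node similarity then reduces to XOR-and-popcount, which is where the reported orders-of-magnitude time and space efficiency comes from.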
arXiv Detail & Related papers (2020-04-19T09:43:14Z) - Towards GANs' Approximation Ability [8.471366736328811]
This paper first theoretically analyzes GANs' approximation property.
We prove that the generator with the input latent variable in GANs can universally approximate the potential data distribution.
On practical datasets, four GANs using SDG also outperform the corresponding traditional GANs when the model architectures are smaller.
arXiv Detail & Related papers (2020-04-10T02:40:16Z) - Fractional order graph neural network [28.229115966246937]
This paper proposes fractional order graph neural networks (FGNNs) to address the challenges of local optimum of classic and fractional graph neural networks.
The approximate calculation of fractional order gradients also overcomes the high computational complexity of fractional order derivations.
arXiv Detail & Related papers (2020-01-05T11:55:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.