Mean Field Game GAN
- URL: http://arxiv.org/abs/2103.07855v1
- Date: Sun, 14 Mar 2021 06:34:38 GMT
- Title: Mean Field Game GAN
- Authors: Shaojun Ma, Haomin Zhou, Hongyuan Zha
- Abstract summary: We propose a novel mean field games (MFGs) based GAN (generative adversarial network) framework.
We utilize the Hopf formula in density space to rewrite MFGs as a primal-dual problem so that we are able to train the model via neural networks and samples.
- Score: 55.445402222849474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel mean field games (MFGs) based GAN (generative
adversarial network) framework. To be specific, we utilize the Hopf formula in
density space to rewrite MFGs as a primal-dual problem so that we are able to
train the model via neural networks and samples. Our model is flexible due to
the freedom of choosing various functionals within the Hopf formula. Moreover,
our formulation mathematically avoids the Lipschitz-1 constraint. The
correctness and efficiency of our method are validated through several
experiments.
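The primal-dual training described above can be illustrated with a toy saddle-point problem. The sketch below is a hedged illustration of the general idea, not the paper's actual Hopf-formula functional: a "generator" parameter is trained by gradient descent while a scalar "discriminator" (dual variable) is trained by gradient ascent on the same objective, with a quadratic penalty on the dual variable standing in as a soft regularizer in place of a Lipschitz-1 clipping constraint.

```python
# Toy primal-dual (minimax) training loop, analogous in spirit to a
# saddle-point GAN formulation. The objective is
#
#   L(theta, w) = w * (mu_target - theta) - (lam / 2) * w**2
#
# The generator parameter theta minimizes L; the dual variable w
# maximizes it. The quadratic penalty on w keeps the dual step
# bounded without any Lipschitz-1 clipping.

def primal_dual_train(mu_target, lr=0.05, lam=0.5, steps=5000):
    theta, w = 0.0, 0.0          # generator parameter, dual variable
    for _ in range(steps):
        # ascent step on the dual variable w
        w += lr * ((mu_target - theta) - lam * w)
        # descent step on the primal (generator) parameter theta
        theta += lr * w
    return theta, w

theta, w = primal_dual_train(mu_target=2.0)
print(theta, w)   # theta converges toward mu_target, w toward 0
```

In the actual framework the expectations would be estimated from samples and both players would be neural networks; the alternating ascent/descent updates and the penalty-instead-of-clipping design choice are the point of the sketch.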
Related papers
- Hopfield-Fenchel-Young Networks: A Unified Framework for Associative Memory Retrieval [25.841394444834933]
Associative memory models, such as Hopfield networks, have garnered renewed interest due to advancements in memory capacity and connections with self-attention in transformers.
In this work, we introduce a unified framework, Hopfield-Fenchel-Young networks, which generalizes these models to a broader family of energy functions.
arXiv Detail & Related papers (2024-11-13T13:13:07Z)
- Score-fPINN: Fractional Score-Based Physics-Informed Neural Networks for High-Dimensional Fokker-Planck-Levy Equations [24.86574584293979]
We introduce an innovative approach for solving high-dimensional Fokker-Planck-Lévy (FPL) equations in modeling non-Brownian processes.
We utilize a fractional score function and physics-informed neural networks (PINNs) to lift the curse of dimensionality (CoD) and alleviate numerical overflow from solutions that decay exponentially with dimension.
arXiv Detail & Related papers (2024-06-17T15:57:23Z)
- Learning Discrete-Time Major-Minor Mean Field Games [61.09249862334384]
We propose a novel discrete-time version of major-minor MFGs (M3FGs) and a learning algorithm based on fictitious play and partitioning the probability simplex.
M3FGs generalize MFGs with common noise and can handle not only random exogenous environment states but also major players.
arXiv Detail & Related papers (2023-12-17T18:22:08Z)
- Diffusion models for probabilistic programming [56.47577824219207]
Diffusion Model Variational Inference (DMVI) is a novel method for automated approximate inference in probabilistic programming languages (PPLs).
DMVI is easy to implement, allows hassle-free inference in PPLs without the drawbacks of, e.g., variational inference using normalizing flows, and does not impose any constraints on the underlying neural network model.
arXiv Detail & Related papers (2023-11-01T12:17:05Z)
- NPEFF: Non-Negative Per-Example Fisher Factorization [52.44573961263344]
We introduce a novel interpretability method called NPEFF that is readily applicable to any end-to-end differentiable model.
We demonstrate that NPEFF has interpretable tunings through experiments on language and vision models.
arXiv Detail & Related papers (2023-10-07T02:02:45Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Bridging Mean-Field Games and Normalizing Flows with Trajectory Regularization [11.517089115158225]
Mean-field games (MFGs) are a modeling framework for systems with a large number of interacting agents.
Normalizing flows (NFs) are a family of deep generative models that compute data likelihoods by using an invertible mapping.
In this work, we unravel the connections between MFGs and NFs by contextualizing the training of an NF as solving the MFG.
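The likelihood computation via an invertible mapping mentioned in this entry rests on the change-of-variables formula. The minimal sketch below (not the paper's model) uses a scalar affine flow, where the log-determinant of the Jacobian reduces to log |a|:

```python
import math

# Minimal normalizing-flow illustration: an invertible affine map
# x = f(z) = a * z + b pushes a standard normal base density forward,
# and the change-of-variables formula gives the exact log-likelihood:
#
#   log p_X(x) = log p_Z(f^{-1}(x)) - log |det df/dz|
#
# For a scalar affine map, |det df/dz| = |a|.

def standard_normal_logpdf(z):
    return -0.5 * z * z - 0.5 * math.log(2.0 * math.pi)

def affine_flow_loglik(x, a, b):
    z = (x - b) / a                       # invert the flow
    return standard_normal_logpdf(z) - math.log(abs(a))

def normal_logpdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

# Pushing N(0, 1) through f(z) = 2z + 1 yields N(1, 4); the flow's
# log-likelihood matches the N(1, 4) density evaluated directly.
print(affine_flow_loglik(1.5, a=2.0, b=1.0))
print(normal_logpdf(1.5, mu=1.0, sigma=2.0))
```

Real NFs stack many such invertible layers with learned parameters and accumulate the log-determinant terms, but the per-layer computation is exactly this pattern.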
arXiv Detail & Related papers (2022-06-30T02:44:39Z)
- Normalizing Field Flows: Solving forward and inverse stochastic differential equations using Physics-Informed flow model [8.455584500599807]
We introduce in this work the normalizing field flows (NFF) for learning random fields from scattered measurements.
We demonstrate the capability of the proposed NFF model for learning non-Gaussian processes, mixed Gaussian processes, and forward & inverse partial differential equations.
arXiv Detail & Related papers (2021-08-30T01:58:01Z)
- Learning High-Dimensional Distributions with Latent Neural Fokker-Planck Kernels [67.81799703916563]
We introduce new techniques to formulate the problem as solving the Fokker-Planck equation in a lower-dimensional latent space.
Our proposed model consists of latent-distribution morphing, a generator, and a parameterized Fokker-Planck kernel function.
arXiv Detail & Related papers (2021-05-10T17:42:01Z)
- Alternating the Population and Control Neural Networks to Solve High-Dimensional Stochastic Mean-Field Games [9.909883019034613]
We present an alternating population and agent control neural network for solving mean field games (MFGs).
Our algorithm is geared toward high-dimensional instances of MFGs that are beyond the reach of existing solution methods.
We show the potential of our method on up to 100-dimensional MFG problems.
arXiv Detail & Related papers (2020-02-24T08:24:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.