Bridging Mean-Field Games and Normalizing Flows with Trajectory
Regularization
- URL: http://arxiv.org/abs/2206.14990v1
- Date: Thu, 30 Jun 2022 02:44:39 GMT
- Title: Bridging Mean-Field Games and Normalizing Flows with Trajectory
Regularization
- Authors: Han Huang and Jiajia Yu and Jie Chen and Rongjie Lai
- Abstract summary: Mean-field games (MFGs) are a modeling framework for systems with a large number of interacting agents.
Normalizing flows (NFs) are a family of deep generative models that compute data likelihoods by using an invertible mapping.
In this work, we unravel the connections between MFGs and NFs by contextualizing the training of an NF as solving the MFG.
- Score: 11.517089115158225
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Mean-field games (MFGs) are a modeling framework for systems with a large
number of interacting agents. They have applications in economics, finance, and
game theory. Normalizing flows (NFs) are a family of deep generative models
that compute data likelihoods by using an invertible mapping, which is
typically parameterized by using neural networks. They are useful for density
modeling and data generation. While active research has been conducted on both
models, few have noted the relationship between the two. In this work, we unravel
the connections between MFGs and NFs by contextualizing the training of an NF
as solving the MFG. This is achieved by reformulating the MFG problem in terms
of agent trajectories and parameterizing a discretization of the resulting MFG
with flow architectures. With this connection, we explore two research
directions. First, we employ expressive NF architectures to accurately solve
high-dimensional MFGs, sidestepping the curse of dimensionality in traditional
numerical methods. Compared with other deep learning approaches, our
trajectory-based formulation encodes the continuity equation in the neural
network, resulting in a better approximation of the population dynamics.
Second, we regularize the training of NFs with transport costs and show its
effectiveness in controlling the model's Lipschitz bound, resulting in better
generalization performance. We demonstrate numerical results through
comprehensive experiments on a variety of synthetic and real-life datasets.
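To make the connection concrete, below is a minimal Python sketch (ours, not the authors' released code) of both directions at once: a discrete normalizing flow whose layer-by-layer states z_0, ..., z_K act as agent trajectories, trained by maximum likelihood plus a transport-cost penalty on the displacements between consecutive states. The coupling architecture, the weight `lam`, and the random data batch are illustrative assumptions.

```python
# Hedged sketch: a K-layer flow read as K time steps of agent trajectories,
# with a transport-cost ("trajectory") regularizer. Not the paper's code.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One invertible step z_k -> z_{k+1}; returns the new state and log|det|."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        assert dim % 2 == 0
        self.net = nn.Sequential(nn.Linear(dim // 2, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))  # -> (log_scale, shift)

    def forward(self, z):
        z1, z2 = z.chunk(2, dim=-1)
        log_s, t = self.net(z1).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)                  # keep scales bounded
        y2 = z2 * log_s.exp() + t
        return torch.cat([y2, z1], dim=-1), log_s.sum(dim=-1)

class TrajectoryRegularizedFlow(nn.Module):
    def __init__(self, dim, steps=8):
        super().__init__()
        self.layers = nn.ModuleList([AffineCoupling(dim) for _ in range(steps)])
        self.base = torch.distributions.Normal(torch.zeros(dim), torch.ones(dim))

    def forward(self, x):
        states, log_det = [x], x.new_zeros(x.shape[0])
        for layer in self.layers:                  # z_0 = x -> z_1 -> ... -> z_K
            x, ld = layer(x)
            states.append(x)
            log_det = log_det + ld
        return states, log_det

def loss_fn(flow, x, lam=0.1):
    states, log_det = flow(x)
    # Maximum likelihood via the change-of-variables formula.
    nll = -(flow.base.log_prob(states[-1]).sum(dim=-1) + log_det).mean()
    # Transport cost: sum of squared displacements along each trajectory.
    # Penalizing it discourages large, abrupt moves, which is how the paper
    # motivates controlling the model's Lipschitz bound.
    transport = sum(((b - a) ** 2).sum(dim=-1)
                    for a, b in zip(states[:-1], states[1:])).mean()
    return nll + lam * transport

flow = TrajectoryRegularizedFlow(dim=2)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
x = torch.randn(256, 2)                            # placeholder data batch
opt.zero_grad()
loss_fn(flow, x).backward()
opt.step()
```

Storing the intermediate states is the point of the trajectory view: the same list serves both the likelihood term (its endpoint) and the transport term (its increments).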
Related papers
- FFHFlow: A Flow-based Variational Approach for Multi-fingered Grasp Synthesis in Real Time [19.308304984645684]
We propose exploiting a special kind of Deep Generative Model (DGM) based on Normalizing Flows (NFs).
We first observed an encouraging improvement in diversity by directly applying a single conditional NF (cNF) to learn a grasp distribution conditioned on the incomplete point cloud.
This motivated us to develop a novel flow-based Deep Latent Variable Model (DLVM).
Unlike Variational Autoencoders (VAEs), the proposed DLVM counteracts typical pitfalls by leveraging two cNFs for the prior and likelihood distributions.
arXiv Detail & Related papers (2024-07-21T13:33:08Z)
- Neuroexplicit Diffusion Models for Inpainting of Optical Flow Fields [8.282495481952784]
We show how to bring model- and data-driven approaches together by combining the explicit PDE-based approaches with convolutional neural networks.
Our model outperforms both fully explicit and fully data-driven baselines in terms of reconstruction quality, robustness and amount of required training data.
arXiv Detail & Related papers (2024-05-23T14:14:27Z)
- FMint: Bridging Human Designed and Data Pretrained Models for Differential Equation Foundation Model [5.748690310135373]
We propose a novel multi-modal foundation model, named FMint, to bridge the gap between human-designed and data-driven models.
Built on a decoder-only transformer architecture with in-context learning, FMint utilizes both numerical and textual data to learn a universal error correction scheme.
Our results demonstrate the effectiveness of the proposed model in terms of both accuracy and efficiency compared to classical numerical solvers.
arXiv Detail & Related papers (2024-04-23T02:36:47Z)
- A Deep Dive into the Connections Between the Renormalization Group and Deep Learning in the Ising Model [0.0]
Renormalization group (RG) is an essential technique in statistical physics and quantum field theory.
We develop extensive renormalization techniques for the 1D and 2D Ising model to provide a baseline for comparison.
For the 2D Ising model, we successfully generated Ising model samples using the Wolff algorithm, and performed the renormalization group flow using a quasi-deterministic method.
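The Wolff algorithm named here is a standard cluster method; the sketch below shows one Wolff update for the 2D Ising model (textbook version, not the paper's implementation; lattice size and temperature are placeholders).

```python
# One Wolff cluster update on a 2D periodic Ising lattice (textbook version,
# not the paper's code; units with J = 1 and k_B = 1 assumed).
import numpy as np

def wolff_step(spins, beta, rng):
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)       # bond-activation probability
    i, j = rng.integers(L, size=2)          # random seed site
    seed = spins[i, j]
    cluster, frontier = {(i, j)}, [(i, j)]
    while frontier:                          # grow the cluster
        x, y = frontier.pop()
        for nbr in (((x + 1) % L, y), ((x - 1) % L, y),
                    (x, (y + 1) % L), (x, (y - 1) % L)):
            if nbr not in cluster and spins[nbr] == seed and rng.random() < p_add:
                cluster.add(nbr)
                frontier.append(nbr)
    for site in cluster:                     # flip the whole cluster at once
        spins[site] = -seed
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))   # placeholder 16x16 lattice
for _ in range(100):
    spins = wolff_step(spins, beta=0.44, rng=rng)  # near the critical point
```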
arXiv Detail & Related papers (2023-08-21T22:50:54Z)
- Moser Flow: Divergence-based Generative Modeling on Manifolds [49.04974733536027]
Moser Flow (MF) is a new class of generative models within the family of continuous normalizing flows (CNFs).
MF does not require invoking or backpropagating through an ODE solver during training.
We demonstrate for the first time the use of flow models for sampling from general curved surfaces.
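For orientation, the core construction (paraphrased from the standard Moser Flow formulation; notation ours and not checked against the paper) writes the model density through the divergence of a learned vector field, which is why training needs only divergence evaluations rather than an ODE solve:

```latex
% Moser Flow, sketched: nu is a fixed prior density, u_theta a learned field.
\[
  \mu_\theta(x) \;=\; \nu(x) \;-\; \operatorname{div}\, u_\theta(x), \qquad
  \mathcal{L}(\theta) \;=\; -\frac{1}{N}\sum_{i=1}^{N}\log \mu_\theta^{+}(x_i)
  \;+\; \lambda \int \mu_\theta^{-}(x)\,dx,
\]
```

where $\mu_\theta^{+}$ and $\mu_\theta^{-}$ denote the positive and negative parts of $\mu_\theta$; the penalty pushes the model toward a valid density, and only sampling requires integrating the associated flow ODE.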
arXiv Detail & Related papers (2021-08-18T09:00:24Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z)
- Learning Gaussian Graphical Models with Latent Confounders [74.72998362041088]
We compare and contrast two strategies for inference in graphical models with latent confounders.
While these two approaches have similar goals, they are motivated by different assumptions about confounding.
We propose a new method, which combines the strengths of these two approaches.
arXiv Detail & Related papers (2021-05-14T00:53:03Z)
- Efficient Construction of Nonlinear Models over Normalized Data [21.531781003420573]
We show how it is possible to decompose the problem in a systematic way, both for binary joins and for multi-way joins, to construct mixture models.
We present algorithms that train the network in a factorized way and offer performance advantages.
arXiv Detail & Related papers (2020-11-23T19:20:03Z)
- Learning Likelihoods with Conditional Normalizing Flows [54.60456010771409]
Conditional normalizing flows (CNFs) are efficient in sampling and inference.
We present a study of CNFs in which the mapping from base density to output space is conditioned on an input x, to model conditional densities p(y|x).
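As a concrete (hypothetical) instance of that conditioning, the sketch below implements a one-layer conditional affine flow for p(y|x); the network shape and dimensions are placeholders, not the paper's architecture.

```python
# A one-layer conditional affine flow for p(y|x): the base-to-output mapping
# y = z * exp(s(x)) + t(x) is conditioned on x. Placeholder architecture.
import torch
import torch.nn as nn

class ConditionalAffineFlow(nn.Module):
    def __init__(self, x_dim, y_dim, hidden=64):
        super().__init__()
        self.cond = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2 * y_dim))

    def log_prob(self, y, x):
        log_s, t = self.cond(x).chunk(2, dim=-1)
        z = (y - t) * torch.exp(-log_s)             # invert the mapping
        base = torch.distributions.Normal(0.0, 1.0)
        # change of variables: log p(y|x) = log N(z; 0, I) - sum(log_s)
        return base.log_prob(z).sum(dim=-1) - log_s.sum(dim=-1)

    def sample(self, x):
        log_s, t = self.cond(x).chunk(2, dim=-1)
        return torch.randn_like(t) * log_s.exp() + t
```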
arXiv Detail & Related papers (2019-11-29T19:17:58Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
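A hedged simplification of the idea (ours, not the paper's exact algorithm): with uniform marginals and equal layer widths, the optimal transport plan between two layers' neurons reduces to a permutation, which can be found by linear assignment on pairwise weight distances.

```python
# Layer-wise fusion via optimal transport, reduced to its simplest case
# (uniform marginals, equal widths => the OT plan is a permutation).
# Illustrative simplification, not the paper's algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_layer(W_a, W_b):
    """Align neurons (rows) of W_b to W_a, then average. Shapes: (out, in)."""
    cost = np.linalg.norm(W_a[:, None, :] - W_b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)   # OT plan as a permutation
    return 0.5 * (W_a[rows] + W_b[cols])       # fused layer weights
```

A full implementation would also apply the same permutation to the next layer's input weights so the composed network stays consistent.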
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.