Operator Learning for Families of Finite-State Mean-Field Games
- URL: http://arxiv.org/abs/2602.13169v1
- Date: Fri, 13 Feb 2026 18:28:34 GMT
- Title: Operator Learning for Families of Finite-State Mean-Field Games
- Authors: William Hofgard, Asaf Cohen, Mathieu Laurière
- Abstract summary: Finite-state mean-field games (MFGs) arise as limits of large interacting particle systems. We propose an operator learning framework that solves parametric families of MFGs. We provide theoretical guarantees on the approximation error, parametric complexity, and generalization performance of our method.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Finite-state mean-field games (MFGs) arise as limits of large interacting particle systems and are governed by an MFG system, a coupled forward-backward differential equation consisting of a forward Kolmogorov-Fokker-Planck (KFP) equation describing the population distribution and a backward Hamilton-Jacobi-Bellman (HJB) equation defining the value function. Solving MFG systems efficiently is challenging, with the structure of each system depending on an initial distribution of players and the terminal cost of the game. We propose an operator learning framework that solves parametric families of MFGs, enabling generalization without retraining for new initial distributions and terminal costs. We provide theoretical guarantees on the approximation error, parametric complexity, and generalization performance of our method, based on a novel regularity result for an appropriately defined flow map corresponding to an MFG system. We demonstrate empirically that our framework achieves accurate approximation for two representative instances of MFGs: a cybersecurity example and a high-dimensional quadratic model commonly used as a benchmark for numerical methods for MFGs.
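The forward-backward structure described in the abstract (a forward KFP equation for the population coupled with a backward HJB equation for the value function) can be illustrated on a toy two-state game. The sketch below uses a damped fixed-point iteration with hypothetical congestion costs and jump-rate controls; it is an illustrative stand-in, not the paper's operator-learning method or its cybersecurity model.

```python
import numpy as np

# Minimal sketch of a discrete-time, finite-state MFG solved by damped
# fixed-point iteration. All model ingredients below are assumptions.
S, T = 2, 50                       # number of states, time horizon
dt = 0.05
m0 = np.array([0.8, 0.2])          # initial distribution (one family parameter)
g = np.array([0.0, 1.0])           # terminal cost (the other family parameter)

def running_cost(m):
    # Hypothetical congestion cost: being in a crowded state is expensive.
    return m.copy()

def solve_mfg(m0, g, n_iter=200, damping=0.5):
    m = np.tile(m0, (T + 1, 1))    # population flow, shape (T+1, S)
    for _ in range(n_iter):
        # Backward HJB sweep: value u and optimal switching rates a.
        u = np.zeros((T + 1, S))
        u[T] = g
        a = np.zeros((T, S))       # rate of jumping to the other state
        for t in range(T - 1, -1, -1):
            du = u[t + 1][::-1] - u[t + 1]   # value gap to the other state
            a[t] = np.maximum(-du, 0.0)      # argmin of a*du + a^2/2 over a >= 0
            u[t] = u[t + 1] + dt * (running_cost(m[t]) + a[t] * du
                                    + 0.5 * a[t] ** 2)
        # Forward KFP sweep under the optimal controls.
        m_new = np.zeros_like(m)
        m_new[0] = m0
        for t in range(T):
            flow = dt * a[t] * m_new[t]      # probability mass switching states
            m_new[t + 1] = m_new[t] - flow + flow[::-1]
        m = (1 - damping) * m + damping * m_new
    return u, m

u, m = solve_mfg(m0, g)
assert np.allclose(m.sum(axis=1), 1.0)       # distributions stay normalized
```

Varying `m0` and `g` changes the whole MFG system, which is exactly the parametric family the paper's operator learning framework approximates without retraining.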
Related papers
- High-dimensional Mean-Field Games by Particle-based Flow Matching [18.129646808071893]
Mean-field games (MFGs) study the Nash equilibrium of systems with a continuum of interacting agents. Despite their broad applicability, solving high-dimensional MFGs remains a significant challenge due to fundamental computational and analytical obstacles. We propose a particle-based deep Flow Matching (FM) method to tackle high-dimensional MFGs.
arXiv Detail & Related papers (2025-12-01T01:04:53Z) - A Theory of Multi-Agent Generative Flow Networks [65.53605277612444]
We propose a theoretical framework for multi-agent generative flow networks (MA-GFlowNets). MA-GFlowNets can be applied to multiple agents to generate objects collaboratively through a series of joint actions. Joint Flow training is based on a local-global principle that allows training a collection of (local) GFNs as a unique (global) GFN.
arXiv Detail & Related papers (2025-09-24T04:01:21Z) - FMIP: Joint Continuous-Integer Flow For Mixed-Integer Linear Programming [52.52020895303244]
Mixed-Integer Linear Programming (MILP) is a foundational tool for complex decision-making problems. We propose Joint Continuous-Integer Flow for Mixed-Integer Linear Programming (FMIP), the first generative framework that models the joint distribution of both integer and continuous variables in MILP solutions. FMIP is fully compatible with arbitrary backbone networks and various downstream solvers, making it well-suited for a broad range of real-world MILP applications.
arXiv Detail & Related papers (2025-07-31T10:03:30Z) - Stochastic Semi-Gradient Descent for Learning Mean Field Games with Population-Aware Function Approximation [16.00164239349632]
Mean field games (MFGs) model interactions in large-population multi-agent systems through population distributions. Traditional learning methods for MFGs are based on fixed-point iteration (FPI), where policy updates and induced population distributions are computed separately and sequentially. We propose a novel perspective that treats the policy and population as a unified parameter controlling the game dynamics.
arXiv Detail & Related papers (2024-08-15T14:51:50Z) - Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL [57.745700271150454]
We study the sample complexity of reinforcement learning in Mean-Field Games (MFGs) with model-based function approximation.
We introduce the Partial Model-Based Eluder Dimension (P-MBED), a more effective notion to characterize the model class complexity.
arXiv Detail & Related papers (2024-02-08T14:54:47Z) - On the Statistical Efficiency of Mean-Field Reinforcement Learning with General Function Approximation [20.66437196305357]
We study the fundamental statistical efficiency of Reinforcement Learning in Mean-Field Control (MFC) and Mean-Field Game (MFG) with general model-based function approximation.
We introduce a new concept called Mean-Field Model-Based Eluder Dimension (MF-MBED), which characterizes the inherent complexity of mean-field model classes.
arXiv Detail & Related papers (2023-05-18T20:00:04Z) - Individual-Level Inverse Reinforcement Learning for Mean Field Games [16.79251229846642]
Mean Field IRL (MFIRL) is the first dedicated IRL framework for MFGs that can handle both cooperative and non-cooperative environments.
We develop a practical algorithm effective for MFGs with unknown dynamics.
arXiv Detail & Related papers (2022-02-13T20:35:01Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently.
arXiv Detail & Related papers (2020-12-04T15:36:29Z) - Type-2 fuzzy reliability redundancy allocation problem and its solution using particle swarm optimization algorithm [9.760638545828497]
The fuzzy multi-objective reliability redundancy allocation problem (FMORRAP) is proposed, which maximizes the system reliability while simultaneously minimizing the system cost.
arXiv Detail & Related papers (2020-05-02T15:39:54Z) - A General Framework for Learning Mean-Field Games [10.483303456655058]
This paper presents a general mean-field game (GMFG) framework for simultaneous learning and decision-making in games with a large population.
It then proposes value-based and policy-based reinforcement learning algorithms with smoothed policies.
Experiments on an equilibrium product pricing problem demonstrate that GMF-V-Q and GMF-P-TRPO, two specific instantiations of GMF-V and GMF-P, respectively, with Q-learning and TRPO, are both efficient and robust in the GMFG setting.
arXiv Detail & Related papers (2020-03-13T00:27:57Z) - Polynomial-Time Exact MAP Inference on Discrete Models with Global Dependencies [83.05591911173332]
The junction tree algorithm is the most general solution for exact MAP inference with run-time guarantees.
We propose a new graph transformation technique via node cloning that ensures a run-time for solving our target problem independent of the form of the corresponding clique tree.
arXiv Detail & Related papers (2019-12-27T13:30:29Z)