Ultracoarse Equilibria and Ordinal-Folding Dynamics in Operator-Algebraic Models of Infinite Multi-Agent Games
- URL: http://arxiv.org/abs/2507.19694v1
- Date: Fri, 25 Jul 2025 22:20:42 GMT
- Title: Ultracoarse Equilibria and Ordinal-Folding Dynamics in Operator-Algebraic Models of Infinite Multi-Agent Games
- Authors: Faruk Alpay, Hamdi Alakkad, Bugra Kilictas, Taylan Alpay
- Abstract summary: We develop an operator-algebraic framework for infinite games with a continuum of agents. We prove that regret-based learning dynamics governed by a noncommutative continuity equation converge to a unique quantal response equilibrium. We introduce the ordinal-folding index, a computable ordinal-valued metric that measures the self-referential depth of the dynamics.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop an operator-algebraic framework for infinite games with a continuum of agents and prove that regret-based learning dynamics governed by a noncommutative continuity equation converge to a unique quantal response equilibrium under mild regularity assumptions. The framework unifies functional analysis, coarse geometry and game theory by assigning to every game a von Neumann algebra that represents collective strategy evolution. A reflective regret operator within this algebra drives the flow of strategy distributions, and its fixed point characterises equilibrium. We introduce the ordinal-folding index, a computable ordinal-valued metric that measures the self-referential depth of the dynamics, and show that it bounds the transfinite time needed for convergence, collapsing to zero on coarsely amenable networks. The theory yields new invariant-subalgebra rigidity results, establishes existence and uniqueness of envy-free and maximin-share allocations in continuum economies, and links analytic properties of regret flows with empirical stability phenomena in large language models. These contributions supply a rigorous mathematical foundation for large-scale multi-agent systems and demonstrate the utility of ordinal metrics for equilibrium selection.
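The regret-based learning dynamics described in the abstract can be illustrated, in a drastically simplified finite-dimensional form, by smoothed best-response (logit) learning in a two-player matrix game, whose fixed point is a quantal response equilibrium. This is only a sketch of the general idea, not the paper's operator-algebraic construction: the payoff matrices, temperature `tau`, step size `eta`, and initial strategies below are illustrative assumptions.

```python
import numpy as np

def softmax(u, tau):
    """Logit (quantal) response to a payoff vector u at temperature tau."""
    z = np.exp((u - u.max()) / tau)  # shift by max for numerical stability
    return z / z.sum()

def logit_dynamics(A, B, tau=0.5, eta=0.1, steps=2000):
    """Smoothed best-response dynamics for a two-player matrix game.

    A[i, j] is the payoff to player 1 for action pair (i, j); B likewise
    for player 2. Each step moves both mixed strategies a fraction eta
    toward the logit response; a fixed point of this update is a quantal
    response equilibrium.
    """
    n, m = A.shape
    # deterministic non-uniform initial mixed strategies
    x = np.arange(1, n + 1, dtype=float); x /= x.sum()
    y = np.arange(1, m + 1, dtype=float); y /= y.sum()
    for _ in range(steps):
        x_new = (1 - eta) * x + eta * softmax(A @ y, tau)
        y = (1 - eta) * y + eta * softmax(B.T @ x, tau)
        x = x_new
    return x, y

# Matching pennies: by symmetry the logit QRE is uniform play for any tau.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = logit_dynamics(A, -A)
```

For this zero-sum example the iteration spirals into the uniform quantal response equilibrium; the contraction rate depends on `eta` and `tau`.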
Related papers
- A Mean-Field Theory of $Θ$-Expectations [2.1756081703276]
We develop a new class of calculus for such non-linear models. The $Θ$-Expectation is shown to be consistent with the axiom of subadditivity.
arXiv Detail & Related papers (2025-07-30T11:08:56Z)
- Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation for Neurosymbolic Reasoning [73.18052192964349]
We develop a theoretical framework that explains how discrete symbolic structures can emerge naturally from continuous neural network training dynamics. By lifting neural parameters to a measure space and modeling training as Wasserstein gradient flow, we show that under geometric constraints, the parameter measure $\mu_t$ undergoes two concurrent phenomena.
arXiv Detail & Related papers (2025-06-26T22:40:30Z)
- Generative System Dynamics in Recurrent Neural Networks [56.958984970518564]
We investigate the continuous-time dynamics of Recurrent Neural Networks (RNNs). We show that skew-symmetric weight matrices are fundamental to enable stable limit cycles in both linear and nonlinear configurations. Numerical simulations showcase how nonlinear activation functions not only maintain limit cycles, but also enhance the numerical stability of the system integration process.
arXiv Detail & Related papers (2025-04-16T10:39:43Z)
- Multi-Agent Q-Learning Dynamics in Random Networks: Convergence due to Exploration and Sparsity [5.925608009772727]
We study Q-learning dynamics in network polymatrix games where the network structure is drawn from random graph models. In each setting, we establish sufficient conditions under which the agents' joint strategies converge to a unique equilibrium. We validate our theoretical findings through numerical simulations and demonstrate that convergence can be reliably achieved in many-agent systems.
arXiv Detail & Related papers (2025-03-13T09:16:51Z)
- A Mathematical Analysis of Neural Operator Behaviors [0.0]
This paper presents a rigorous framework for analyzing the behaviors of neural operators.
We focus on their stability, convergence, clustering dynamics, universality, and generalization error.
We aim to offer clear and unified guidance in a single setting for the future design of neural operator-based methods.
arXiv Detail & Related papers (2024-10-28T19:38:53Z)
- Transition of $α$-mixing in Random Iterations with Applications in Queuing Theory [0.0]
We show the transfer of mixing properties from the exogenous regressor to the response via coupling arguments. We also study Markov chains in random environments with drift and minorization conditions, even under non-stationary environments.
arXiv Detail & Related papers (2024-10-07T14:13:37Z)
- Exploiting hidden structures in non-convex games for convergence to Nash equilibrium [62.88214569402201]
A wide array of modern machine learning applications can be formulated as non-cooperative games whose solutions are Nash equilibria.
We provide explicit convergence guarantees for both deterministic and stochastic environments.
arXiv Detail & Related papers (2023-12-27T15:21:25Z)
- Discrete Lagrangian Neural Networks with Automatic Symmetry Discovery [17.736465741047315]
We introduce a framework to learn a discrete Lagrangian along with its symmetry group from discrete observations of motions. The learning process does not restrict the form of the Lagrangian, does not require velocity or momentum observations or predictions, and incorporates a cost term.
arXiv Detail & Related papers (2022-11-20T00:46:33Z)
- Decimation technique for open quantum systems: a case study with driven-dissipative bosonic chains [62.997667081978825]
Unavoidable coupling of quantum systems to external degrees of freedom leads to dissipative (non-unitary) dynamics.
We introduce a method to deal with these systems based on the calculation of the (dissipative) lattice Green's function.
We illustrate the power of this method with several examples of driven-dissipative bosonic chains of increasing complexity.
arXiv Detail & Related papers (2022-02-15T19:00:09Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of stochasticity in its success remains unclear.
We show that heavy-tailed behaviour commonly arises in the parameters as a consequence of multiplicative noise.
A detailed analysis is conducted in which we describe how key factors, including step size and data, affect this behaviour, with similar results observed on state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
- On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic integrators to nonconservative and, in particular, dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
arXiv Detail & Related papers (2020-04-15T00:36:49Z)
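The last entry's idea, symplectic integration extended to dissipative Hamiltonian systems, can be illustrated on a damped harmonic oscillator. The scheme below is a standard conformal symplectic Euler sketch, not the paper's general framework; the step size, friction coefficient, and horizon are illustrative assumptions.

```python
import math

def conformal_symplectic_euler(q, p, h, gamma, steps):
    """Conformal symplectic Euler for the damped oscillator
    H(q, p) = (p**2 + q**2) / 2 with friction coefficient gamma.

    The exact damping factor exp(-gamma * h) multiplies the momentum,
    so the discrete map contracts phase-space area at the same rate as
    the continuous dissipative flow, mirroring the controlled-error
    convergence preservation described in the abstract.
    """
    traj = [(q, p)]
    for _ in range(steps):
        p = math.exp(-gamma * h) * p - h * q  # damped momentum kick
        q = q + h * p                         # drift with updated momentum
        traj.append((q, p))
    return traj

# Start at rest displacement q = 1 and integrate for 20 time units.
traj = conformal_symplectic_euler(q=1.0, p=0.0, h=0.01, gamma=0.5, steps=2000)
q_end, p_end = traj[-1]
energy = lambda q, p: 0.5 * (q * q + p * p)
```

Tracking `energy` along `traj` shows the oscillation amplitude decaying at the rate set by `gamma`, rather than drifting as a naive explicit Euler discretization would.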
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.