The Invisible Handshake: Tacit Collusion between Adaptive Market Agents
- URL: http://arxiv.org/abs/2510.15995v1
- Date: Tue, 14 Oct 2025 08:28:33 GMT
- Title: The Invisible Handshake: Tacit Collusion between Adaptive Market Agents
- Authors: Luigi Foscari, Emanuele Guidotti, Nicolò Cesa-Bianchi, Tatjana Chavdarova, Alfio Ferrara,
- Abstract summary: We study the emergence of tacit collusion between adaptive trading agents in a market with endogenous price formation. We show that, when agents follow simple learning algorithms to maximize their own wealth, the resulting dynamics converge to collusive strategy profiles.
- Score: 16.94262518248427
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the emergence of tacit collusion between adaptive trading agents in a stochastic market with endogenous price formation. Using a two-player repeated game between a market maker and a market taker, we characterize feasible and collusive strategy profiles that raise prices beyond competitive levels. We show that, when agents follow simple learning algorithms (e.g., gradient ascent) to maximize their own wealth, the resulting dynamics converge to collusive strategy profiles, even in highly liquid markets with small trade sizes. By highlighting how simple learning strategies naturally lead to tacit collusion, our results offer new insights into the dynamics of AI-driven markets.
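The learning mechanism in the abstract, each agent running gradient ascent on its own wealth, can be sketched in a toy repeated pricing game. This is not the paper's maker-taker market model: the logit demand, marginal cost, and step size below are illustrative assumptions. Even in this minimal sketch, simultaneous self-interested gradient ascent carries both prices well above the competitive benchmark of price = marginal cost:

```python
import math

def profit(p_own, p_other, cost=1.0, tau=0.5):
    # Logit demand: the cheaper agent captures a larger (smooth) market share.
    share = math.exp(-p_own / tau) / (math.exp(-p_own / tau) + math.exp(-p_other / tau))
    return (p_own - cost) * share

def grad(p_own, p_other, eps=1e-5):
    # Numerical gradient of an agent's own profit with respect to its own price.
    return (profit(p_own + eps, p_other) - profit(p_own - eps, p_other)) / (2 * eps)

p1 = p2 = 1.0  # start at marginal cost, the competitive benchmark
lr = 0.05
for _ in range(5000):
    g1, g2 = grad(p1, p2), grad(p2, p1)  # simultaneous self-interested updates
    p1, p2 = p1 + lr * g1, p2 + lr * g2

print(round(p1, 3), round(p2, 3))  # both prices settle strictly above cost = 1.0
```

In this toy the dynamics settle at the static game's equilibrium prices; the paper's contribution is showing that, in their stochastic maker-taker market, analogous wealth-maximizing dynamics converge to collusive profiles beyond competitive levels.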
Related papers
- Strategic Self-Improvement for Competitive Agents in AI Labour Markets [45.88028371034407]
This paper puts forward a new framework that is the first to capture the real-world economic forces shaping agentic labor markets. We illustrate our framework through a tractable simulated gig economy where agentic Large Language Models (LLMs) compete for jobs. Our simulations reproduce classic macroeconomic phenomena found in human labor markets, while controlled experiments reveal potential AI-driven economic trends.
arXiv Detail & Related papers (2025-12-04T16:57:28Z)
- Emergence from Emergence: Financial Market Simulation via Learning with Heterogeneous Preferences [3.722808691920657]
We develop a multi-agent reinforcement learning framework in which agents endowed with heterogeneous risk aversion, time discounting, and information access collectively learn trading strategies. The experiment reveals that (i) learning with heterogeneous preferences drives agents to develop strategies aligned with their individual traits, fostering behavioral differentiation and niche specialization within the market, and (ii) the interactions among the differentiated agents are essential for the emergence of realistic market dynamics.
arXiv Detail & Related papers (2025-11-07T12:54:27Z)
- Multi-Agent Reinforcement Learning for Market Making: Competition without Collusion [6.598173855286935]
We propose a hierarchical multi-agent reinforcement learning framework to study algorithmic collusion in market making. The framework includes a self-interested market maker (AgentA), which is trained in an uncertain environment shaped by an adversary. We show that adaptive incentive control supports more sustainable strategic co-existence in heterogeneous agent environments.
arXiv Detail & Related papers (2025-10-29T20:07:47Z)
- Magentic Marketplace: An Open-Source Environment for Studying Agentic Markets [74.91125572848439]
We study two-sided agentic marketplaces where Assistant agents represent consumers and Service agents represent competing businesses. This environment enables us to study key market dynamics: the utility agents achieve, behavioral biases, vulnerability to manipulation, and how search mechanisms shape market outcomes. Our experiments show that frontier models can approach optimal welfare, but only under ideal search conditions. Performance degrades sharply with scale, and all models exhibit a severe first-proposal bias, creating 10-30x advantages for response speed over quality.
arXiv Detail & Related papers (2025-10-27T18:35:59Z) - Deviations from the Nash equilibrium and emergence of tacit collusion in a two-player optimal execution game with reinforcement learning [0.9208007322096533]
We study a scenario in which two autonomous agents learn to liquidate the same asset optimally in the presence of market impact.
Our results show that the strategies learned by the agents deviate significantly from the Nash equilibrium of the corresponding market impact game.
We explore how different levels of market volatility influence the agents' performance and the equilibria they discover.
arXiv Detail & Related papers (2024-08-21T16:54:53Z) - A Network Simulation of OTC Markets with Multiple Agents [3.8944986367855963]
We present a novel approach to simulating an over-the-counter (OTC) financial market in which trades are intermediated solely by market makers.
We show that our network-based model can lend insights into the effect of market structure on price action.
arXiv Detail & Related papers (2024-05-03T20:45:00Z)
- MERMAIDE: Learning to Align Learners using Model-Based Meta-Learning [62.065503126104126]
We study how a principal can efficiently and effectively intervene on the rewards of a previously unseen learning agent in order to induce desirable outcomes.
This is relevant to many real-world settings like auctions or taxation, where the principal may not know the learning behavior nor the rewards of real people.
We introduce MERMAIDE, a model-based meta-learning framework to train a principal that can quickly adapt to out-of-distribution agents.
arXiv Detail & Related papers (2023-04-10T15:44:50Z)
- Towards Multi-Agent Reinforcement Learning driven Over-The-Counter Market Simulations [16.48389671789281]
We study a game between liquidity provider and liquidity taker agents interacting in an over-the-counter market.
By playing against each other, our deep-reinforcement-learning-driven agents learn emergent behaviors.
We show convergence rates for our multi-agent policy gradient algorithm under a transitivity assumption.
arXiv Detail & Related papers (2022-10-13T17:06:08Z)
- A simple learning agent interacting with an agent-based market model [0.0]
We consider the learning dynamics of a single reinforcement learning optimal execution trading agent when it interacts with an agent-based financial market model.
We find that the moments of the model are robust to the impact of the learning agent, except for the Hurst exponent.
The introduction of the learning agent preserves the shape of the price-impact curves but can reduce the trade-sign autocorrelations as its trading volume increases.
arXiv Detail & Related papers (2022-08-22T16:42:06Z)
- Conditional Imitation Learning for Multi-Agent Games [89.897635970366]
We study the problem of conditional multi-agent imitation learning, where we have access to joint trajectory demonstrations at training time.
We propose a novel approach to address the difficulties of scalability and data scarcity.
Our model learns a low-rank subspace over ego and partner agent strategies, then infers and adapts to a new partner strategy by interpolating in the subspace.
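The low-rank-subspace idea above can be sketched with plain linear algebra. All of the names and dimensions below are hypothetical, and the paper's model learns the subspace jointly with a policy rather than by a direct SVD; this sketch only shows the "learn a low-rank subspace, then adapt to a new partner by projecting into it" structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each partner strategy is a vector of action logits over
# 20 states; the 50 training partners actually vary along only 3 latent
# "styles", plus a little noise.
d, k, n = 20, 3, 50
basis_true = rng.normal(size=(d, k))
train = basis_true @ rng.normal(size=(k, n)) + 0.01 * rng.normal(size=(d, n))

# Learn a low-rank subspace over partner strategies via truncated SVD.
U, S, Vt = np.linalg.svd(train, full_matrices=False)
basis = U[:, :k]  # orthonormal rank-k basis capturing strategy variation

# A new, unseen partner: infer its coordinates in the subspace (a least-squares
# projection, since the basis is orthonormal), then adapt by reconstructing.
new_partner = basis_true @ rng.normal(size=k)
coords = basis.T @ new_partner
reconstruction = basis @ coords

err = np.linalg.norm(new_partner - reconstruction) / np.linalg.norm(new_partner)
print(f"relative reconstruction error: {err:.4f}")  # small: the subspace transfers
```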
arXiv Detail & Related papers (2022-01-05T04:40:13Z)
- Finding General Equilibria in Many-Agent Economic Simulations Using Deep Reinforcement Learning [72.23843557783533]
We show that deep reinforcement learning can discover stable solutions that are epsilon-Nash equilibria for a meta-game over agent types.
Our approach is more flexible and does not need unrealistic assumptions, e.g., market clearing.
We demonstrate our approach in real-business-cycle models, a representative family of DGE models, with 100 worker-consumers, 10 firms, and a government that taxes and redistributes.
arXiv Detail & Related papers (2022-01-03T17:00:17Z)
- Deep Q-Learning Market Makers in a Multi-Agent Simulated Stock Market [58.720142291102135]
This paper focuses on the study of these market makers' strategies from an agent-based perspective.
We propose the application of Reinforcement Learning (RL) for the creation of intelligent market makers in simulated stock markets.
arXiv Detail & Related papers (2021-12-08T14:55:21Z)
- Distributed Adaptive Learning Under Communication Constraints [54.22472738551687]
This work examines adaptive distributed learning strategies designed to operate under communication constraints.
We consider a network of agents that must solve an online optimization problem from continual observation of streaming data.
arXiv Detail & Related papers (2021-12-03T19:23:48Z)
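The adapt-then-combine pattern described in the entry above can be sketched with a diffusion LMS network. Everything here is an illustrative assumption, not the paper's algorithm: the communication constraint is modeled as finite-precision messages (each agent shares its estimate rounded to one decimal), and the network is fully connected with uniform combination weights:

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])  # unknown model all agents try to track
n_agents, d = 5, 3
W = np.zeros((n_agents, d))  # each agent's running estimate

def compress(w):
    # Hypothetical communication constraint: one-decimal precision messages.
    return np.round(w, 1)

mu = 0.05
for _ in range(4000):
    # Adapt: each agent takes an LMS step on its own streaming observation.
    for i in range(n_agents):
        x = rng.normal(size=d)
        y = x @ w_true + 0.1 * rng.normal()
        W[i] += mu * (y - x @ W[i]) * x
    # Combine: mix the local estimate with the average of the compressed
    # estimates received from neighbors (full graph, uniform weights).
    msgs = np.array([compress(W[i]) for i in range(n_agents)])
    W = 0.5 * W + 0.5 * msgs.mean(axis=0)

print(np.round(W.mean(axis=0), 2))  # network estimate near w_true
```

Despite each message carrying only one decimal of precision, the adapt step keeps the network's average estimate anchored to the streaming data.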
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.