Competitive Multi-Operator Reinforcement Learning for Joint Pricing and Fleet Rebalancing in AMoD Systems
- URL: http://arxiv.org/abs/2603.05000v1
- Date: Thu, 05 Mar 2026 09:44:24 GMT
- Title: Competitive Multi-Operator Reinforcement Learning for Joint Pricing and Fleet Rebalancing in AMoD Systems
- Authors: Emil Kragh Toft, Carolin Schmidt, Daniele Gammelli, Filipe Rodrigues,
- Abstract summary: We investigate the impact of competition on policy learning by introducing a multi-operator reinforcement learning framework. Experiments using real-world data from multiple cities demonstrate that competition fundamentally alters learned behaviors, leading to lower prices and distinct fleet positioning patterns.
- Score: 6.547090882667874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous Mobility-on-Demand (AMoD) systems promise to revolutionize urban transportation by providing affordable on-demand services to meet growing travel demand. However, realistic AMoD markets will be competitive, with multiple operators competing for passengers through strategic pricing and fleet deployment. While reinforcement learning has shown promise in optimizing single-operator AMoD control, existing work fails to capture competitive market dynamics. We investigate the impact of competition on policy learning by introducing a multi-operator reinforcement learning framework where two operators simultaneously learn pricing and fleet rebalancing policies. By integrating discrete choice theory, we enable passenger allocation and demand competition to emerge endogenously from utility-maximizing decisions. Experiments using real-world data from multiple cities demonstrate that competition fundamentally alters learned behaviors, leading to lower prices and distinct fleet positioning patterns compared to monopolistic settings. Notably, we demonstrate that learning-based approaches are robust to the additional stochasticity of competition, with competitive agents successfully converging to effective policies while accounting for partially unobserved competitor strategies.
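The endogenous passenger allocation described in the abstract can be sketched with a multinomial logit model, the standard tool of discrete choice theory: each passenger picks the option with the highest random utility, which yields softmax choice probabilities over operators. The utility specification, coefficients, prices, and wait times below are illustrative assumptions, not values from the paper:

```python
import math

def choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(u_i) / sum_j exp(u_j)."""
    m = max(utilities)  # subtract the max for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def utility(price, wait_time, beta_price=0.5, beta_time=0.1):
    """Hypothetical linear utility: disutility grows with price and wait."""
    return -beta_price * price - beta_time * wait_time

# A passenger chooses between operator A, operator B, and an outside
# option (e.g. public transit) normalized to utility 0.
u_a = utility(price=8.0, wait_time=4.0)
u_b = utility(price=7.0, wait_time=6.0)
p_a, p_b, p_outside = choice_probabilities([u_a, u_b, 0.0])
```

Under this kind of model, an operator that undercuts its competitor's price or repositions its fleet to cut wait times captures a larger demand share automatically, which is what lets competition emerge from the passengers' utility-maximizing decisions rather than from a hand-coded allocation rule.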
Related papers
- ChargingBoul: A Competitive Negotiating Agent with Novel Opponent Modeling [0.0]
This paper presents ChargingBoul, a negotiating agent that competed in the 2022 Automated Negotiating Agents Competition (ANAC). ChargingBoul employs a lightweight yet effective strategy that balances concession and opponent modeling to achieve high negotiation outcomes. We evaluate ChargingBoul's performance using competition results and subsequent studies that have utilized the agent in negotiation research.
arXiv Detail & Related papers (2025-12-06T23:32:11Z) - The Bidding Games: Reinforcement Learning for MEV Extraction on Polygon Blockchain [0.11880231424287215]
We present a reinforcement learning framework for MEV extraction on Polygon Atlas. Our work establishes that reinforcement learning provides a critical advantage in high-frequency MEV environments.
arXiv Detail & Related papers (2025-10-16T12:54:53Z) - The Hunger Game Debate: On the Emergence of Over-Competition in Multi-Agent Systems [90.96738882568224]
This paper investigates over-competition in multi-agent debate, where agents under extreme pressure exhibit unreliable, harmful behaviors. To study this phenomenon, we propose HATE, a novel experimental framework that simulates debates in a zero-sum competition arena.
arXiv Detail & Related papers (2025-09-30T11:44:47Z) - Order Acquisition Under Competitive Pressure: A Rapidly Adaptive Reinforcement Learning Approach for Ride-Hailing Subsidy Strategies [0.5717569761927883]
We propose two key techniques to rapidly adapt to competitors' pricing adjustments: Fast Competition Adaptation (FCA), which enables swift responses to dynamic price changes, and Reinforced Lagrangian Adjustment (RLA), which ensures adherence to budget constraints. Experimental results demonstrate that our proposed method consistently outperforms baseline approaches across diverse market conditions.
arXiv Detail & Related papers (2025-07-03T02:38:42Z) - Dynamic Pricing in High-Speed Railways Using Multi-Agent Reinforcement Learning [5.680630061642918]
This paper addresses the challenge of designing effective dynamic pricing strategies in the context of competing and cooperating operators. A reinforcement learning framework based on a non-zero-sum Markov game is proposed, incorporating random utility models to capture passenger decision-making.
arXiv Detail & Related papers (2025-01-14T16:19:25Z) - MetaTrading: An Immersion-Aware Model Trading Framework for Vehicular Metaverse Services [92.40586697273868]
Timely updating of Internet of Things data is crucial for achieving immersion in vehicular metaverse services. We propose an immersion-aware model trading framework that enables efficient and privacy-preserving data provisioning through federated learning. Experimental results show that the proposed framework outperforms state-of-the-art benchmarks.
arXiv Detail & Related papers (2024-10-25T16:20:46Z) - Strategic Collusion of LLM Agents: Market Division in Multi-Commodity Competitions [0.0]
Machine-learning technologies are seeing increased deployment in real-world market scenarios. We explore the strategic behaviors of large language models (LLMs) when deployed as autonomous agents in multi-commodity markets.
arXiv Detail & Related papers (2024-09-19T20:10:40Z) - Defection-Free Collaboration between Competitors in a Learning System [61.22540496065961]
We study collaborative learning systems in which the participants are competitors who will defect from the system if they lose revenue by collaborating.
We propose a more equitable, *defection-free* scheme in which both firms share with each other while losing no revenue.
arXiv Detail & Related papers (2024-06-22T17:29:45Z) - CompeteSMoE -- Effective Training of Sparse Mixture of Experts via Competition [52.2034494666179]
Sparse mixture of experts (SMoE) offers an appealing solution for scaling up model complexity beyond merely increasing the network's depth or width.
We propose a competition mechanism to address this fundamental challenge of representation collapse.
By routing inputs only to experts with the highest neural response, we show that, under mild assumptions, competition enjoys the same convergence rate as the optimal estimator.
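The competition mechanism summarized above (route an input only to the experts with the strongest neural response) can be illustrated with a toy top-1 routing sketch. The linear expert parameterization and the use of the output norm as the "response" are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (illustrative): 4 linear experts over 8-dimensional inputs.
N_EXPERTS, DIM = 4, 8
experts = [rng.standard_normal((DIM, DIM)) for _ in range(N_EXPERTS)]

def compete_route(x):
    """Route x to the single expert producing the largest response norm."""
    outputs = [x @ W for W in experts]                # every expert computes its output
    responses = [np.linalg.norm(o) for o in outputs]  # "neural response" = output norm
    winner = int(np.argmax(responses))                # competition: strongest response wins
    return winner, outputs[winner]

winner, y = compete_route(rng.standard_normal(DIM))
```

Because the winner is determined by the experts' actual responses rather than by a separate learned gate, each expert is pushed to specialize on the inputs it responds to most strongly, which is the intuition behind avoiding representation collapse.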
arXiv Detail & Related papers (2024-02-04T15:17:09Z) - Learning to Sail Dynamic Networks: The MARLIN Reinforcement Learning Framework for Congestion Control in Tactical Environments [53.08686495706487]
This paper proposes an RL framework that leverages an accurate and parallelizable emulation environment to reenact the conditions of a tactical network.
We evaluate our RL framework by training a MARLIN agent in conditions replicating a bottleneck link transition between a Satellite Communication (SATCOM) and a UHF Wide Band (UHF) radio link.
arXiv Detail & Related papers (2023-06-27T16:15:15Z) - A Cooperative-Competitive Multi-Agent Framework for Auto-bidding in Online Advertising [53.636153252400945]
We propose a general Multi-Agent reinforcement learning framework for Auto-Bidding, namely MAAB, to learn the auto-bidding strategies.
Our approach outperforms several baseline methods in terms of social welfare and guarantees the ad platform's revenue.
arXiv Detail & Related papers (2021-06-11T08:07:14Z) - Flatland Competition 2020: MAPF and MARL for Efficient Train Coordination on a Grid World [49.80905654161763]
The Flatland competition aimed at finding novel approaches to solve the vehicle re-scheduling problem (VRSP).
The VRSP is concerned with scheduling trips in traffic networks and the re-scheduling of vehicles when disruptions occur.
The ever-growing complexity of modern railway networks makes dynamic real-time scheduling of traffic virtually impossible.
arXiv Detail & Related papers (2021-03-30T17:13:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.