Learning-based Multi-agent Race Strategies in Formula 1
- URL: http://arxiv.org/abs/2602.23056v1
- Date: Thu, 26 Feb 2026 14:41:29 GMT
- Title: Learning-based Multi-agent Race Strategies in Formula 1
- Authors: Giona Fieni, Joschua Wüthrich, Marc-Philippe Neumann, Christopher H. Onder
- Abstract summary: This paper proposes a reinforcement learning approach for multi-agent race strategy optimization. Agents learn to balance energy management, tire degradation, aerodynamic interaction, and pit-stop decisions. Results show that the agents adapt pit timing, tire selection, and energy allocation in response to opponents.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Formula 1, race strategies are adapted according to evolving race conditions and competitors' actions. This paper proposes a reinforcement learning approach for multi-agent race strategy optimization. Agents learn to balance energy management, tire degradation, aerodynamic interaction, and pit-stop decisions. Building on a pre-trained single-agent policy, we introduce an interaction module that accounts for the behavior of competitors. The combination of the interaction module and a self-play training scheme generates competitive policies, and agents are ranked based on their relative performance. Results show that the agents adapt pit timing, tire selection, and energy allocation in response to opponents, achieving robust and consistent race performance. Because the framework relies only on information available during real races, it can support race strategists' decisions before and during races.
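The abstract describes building on a pre-trained single-agent policy with an interaction module and a self-play training scheme, then ranking agents by relative performance. The following is a heavily simplified, hypothetical sketch of that self-play-and-ranking loop (our own toy construction, not the authors' code): a "policy" is collapsed to a single aggressiveness parameter, the race is a stylized head-to-head payoff, and training is hill-climbing against opponents sampled from a pool of past snapshots.

```python
import random

def race(a, b, rng):
    """Stylized head-to-head race: returns a's margin over b.
    More aggression helps, but only up to a tire/energy limit (made-up model)."""
    score = lambda x: x - 1.5 * x * x  # concave payoff, peaks near x = 1/3
    return score(a) - score(b) + rng.gauss(0.0, 0.01)

def self_play(generations=200, seed=0):
    """Self-play loop: perturb the current policy, keep it if it beats a
    randomly sampled past snapshot harder, and snapshot every generation."""
    rng = random.Random(seed)
    pool = [0.9]            # snapshot pool starts with a naive, over-aggressive policy
    current = 0.9
    for _ in range(generations):
        opponent = rng.choice(pool)
        candidate = min(1.0, max(0.0, current + rng.gauss(0.0, 0.05)))
        if race(candidate, opponent, rng) > race(current, opponent, rng):
            current = candidate
        pool.append(current)
    return current, pool

def rank(pool, rng):
    """Rank policies by mean margin over every other pool member,
    mirroring the paper's ranking by relative performance."""
    mean = lambda p: sum(race(p, q, rng) for q in pool) / len(pool)
    return sorted(pool, key=mean, reverse=True)

if __name__ == "__main__":
    best, pool = self_play()
    ranked = rank(pool[-10:], random.Random(1))
    print(f"final policy {best:.2f}, top-ranked {ranked[0]:.2f}")
```

In this toy, self-play drives the policy away from its over-aggressive start toward the payoff optimum; the real framework learns full race strategies (pit timing, tire choice, energy allocation) rather than a scalar.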
Related papers
- Agile Flight Emerges from Multi-Agent Competitive Racing [7.9331622838838305]
We find that both agile flight and strategy emerge from agents trained with reinforcement learning. We find that multi-agent competition yields policies that transfer more reliably to the real world than policies trained with a single-agent progress-based reward.
arXiv Detail & Related papers (2025-12-12T18:48:50Z) - Fair Play in the Fast Lane: Integrating Sportsmanship into Autonomous Racing Systems [44.52724799426566]
This paper introduces a bi-level game-theoretic framework to integrate sportsmanship (SPS) into versus racing. At the high level, we model racing intentions using a Stackelberg game, where Monte Carlo Tree Search (MCTS) is employed to derive optimal strategies. At the low level, vehicle interactions are formulated as a Generalized Nash Equilibrium Problem (GNEP), ensuring that all agents follow sportsmanship constraints while optimizing their trajectories.
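The high-level Stackelberg idea can be illustrated with a minimal leader-follower toy (illustrative only; the paper uses MCTS over racing intentions and a GNEP at the low level, neither of which appears here). The action names and payoff numbers below are made up for a "defend vs. attack into a corner" scenario: the leader commits first, the follower best-responds, and the leader chooses the commitment that is best given that anticipated response.

```python
ACTIONS = ["hold_line", "block", "yield"]

# PAYOFF[leader_action][follower_action] = (leader_utility, follower_utility)
# All numbers are invented for illustration.
PAYOFF = {
    "hold_line": {"hold_line": (2, 2), "block": (0, 1), "yield": (3, 1)},
    "block":     {"hold_line": (1, 0), "block": (-1, -1), "yield": (2, 0)},
    "yield":     {"hold_line": (0, 3), "block": (1, 2), "yield": (1, 1)},
}

def follower_best_response(leader_action):
    """Follower maximizes its own utility given the leader's commitment."""
    return max(ACTIONS, key=lambda f: PAYOFF[leader_action][f][1])

def stackelberg():
    """Leader maximizes its utility, anticipating the follower's best response."""
    def leader_value(l):
        return PAYOFF[l][follower_best_response(l)][0]
    leader = max(ACTIONS, key=leader_value)
    return leader, follower_best_response(leader)
```

With these payoffs both racers settle on holding their line; in the paper, sportsmanship constraints would additionally prune actions (e.g. dangerous blocks) from the feasible sets.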
arXiv Detail & Related papers (2025-03-04T10:14:19Z) - Explainable Reinforcement Learning for Formula One Race Strategy [8.158206540652179]
We introduce a reinforcement learning model, RSRL, to control race strategies in simulations. RSRL achieves an average finishing position of P5.33 on our test race, the 2023 Bahrain Grand Prix. We then demonstrate, in a generalisability study, how performance for one track or multiple tracks can be prioritised via training.
arXiv Detail & Related papers (2025-01-07T13:54:19Z) - Explainable Time Series Prediction of Tyre Energy in Formula One Race Strategy [2.6667819481058928]
Formula One (F1) race strategy takes place in a high-pressure and fast-paced environment. Two of the core decisions of race strategy are when to make pit stops and which tyre compounds to select. In this work, we trained deep learning models using the Mercedes-AMG PETRONAS F1 team's historic race data.
arXiv Detail & Related papers (2025-01-07T12:38:48Z) - All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization [57.615269148301515]
In a competitive game scenario, a set of agents have to learn decisions that maximize their goals and minimize their adversaries' goals at the same time.
We propose a novel model composed of three neural layers that learn a representation of a competitive game, learn how to map the strategy of specific opponents, and how to disrupt them.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z) - Mastering Nordschleife -- A comprehensive race simulation for AI strategy decision-making in motorsports [0.0]
This paper develops a novel simulation model tailored to GT racing.
By integrating the simulation with OpenAI's Gym framework, a reinforcement learning environment is created and an agent is trained.
The paper contributes to the broader application of reinforcement learning in race simulations and unlocks the potential for race strategy optimization beyond FIA Formula 1.
arXiv Detail & Related papers (2023-06-28T10:39:31Z) - Coach-assisted Multi-Agent Reinforcement Learning Framework for Unexpected Crashed Agents [120.91291581594773]
We present a formal formulation of a cooperative multi-agent reinforcement learning system with unexpected crashes.
We propose a coach-assisted multi-agent reinforcement learning framework, which introduces a virtual coach agent to adjust the crash rate during training.
To the best of our knowledge, this work is the first to study the unexpected crashes in the multi-agent system.
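The coach's role, as we read the abstract, is to adapt the crash rate seen during training. A hedged toy sketch of that curriculum idea (our own simplification, not the authors' implementation; the episode model and all thresholds are invented): the coach raises the crash probability while the cooperative team copes well and lowers it when performance drops.

```python
import random

def team_episode(crash_p, rng):
    """Toy cooperative episode with 4 agents: each may crash independently;
    reward is simply the fraction of agents that survive."""
    alive = [rng.random() >= crash_p for _ in range(4)]
    return sum(alive) / 4.0

def coach_training(episodes=500, target=0.75, seed=0):
    """Virtual coach: nudge the crash probability up whenever the team's
    reward meets the target, down when it falls short, keeping training
    at a challenging but survivable difficulty."""
    rng = random.Random(seed)
    crash_p = 0.0
    for _ in range(episodes):
        reward = team_episode(crash_p, rng)
        crash_p += 0.01 if reward >= target else -0.01
        crash_p = min(0.9, max(0.0, crash_p))
    return crash_p
```

In this toy the crash rate climbs from zero until episodes succeed only about half the time, which is the curriculum effect the coach is meant to provide; the real framework couples this to multi-agent policy learning.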
arXiv Detail & Related papers (2022-03-16T08:22:45Z) - Who Leads and Who Follows in Strategic Classification? [82.44386576129295]
We argue that the order of play in strategic classification is fundamentally determined by the relative frequencies at which the decision-maker and the agents adapt to each other's actions.
We show that a decision-maker with the freedom to choose their update frequency can induce learning dynamics that converge to Stackelberg equilibria with either order of play.
arXiv Detail & Related papers (2021-06-23T16:48:46Z) - Moody Learners -- Explaining Competitive Behaviour of Reinforcement Learning Agents [65.2200847818153]
In a competitive scenario, the agent does not only have a dynamic environment but also is directly affected by the opponents' actions.
Observing an agent's Q-values is a common way of explaining its behavior; however, Q-values alone do not show the temporal relation between the selected actions.
arXiv Detail & Related papers (2020-07-30T11:30:42Z) - Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.