Game Theory with Simulation in the Presence of Unpredictable Randomisation
- URL: http://arxiv.org/abs/2410.14311v1
- Date: Fri, 18 Oct 2024 09:17:18 GMT
- Title: Game Theory with Simulation in the Presence of Unpredictable Randomisation
- Authors: Vojtech Kovarik, Nathaniel Sauerberg, Lewis Hammond, Vincent Conitzer
- Abstract summary: We study a game-theoretic setting where one agent can pay a fixed cost to simulate the other in order to learn its mixed strategy.
We prove that, in contrast to prior work on pure-strategy simulation, enabling mixed-strategy simulation may no longer lead to improved outcomes for both players.
We establish that mixed-strategy simulation can improve social welfare if the simulator has the option to scale their level of trust.
- Score: 22.216141581645115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI agents will be predictable in certain ways that traditional agents are not. Where and how can we leverage this predictability in order to improve social welfare? We study this question in a game-theoretic setting where one agent can pay a fixed cost to simulate the other in order to learn its mixed strategy. As a negative result, we prove that, in contrast to prior work on pure-strategy simulation, enabling mixed-strategy simulation may no longer lead to improved outcomes for both players in all so-called "generalised trust games". In fact, mixed-strategy simulation does not help in any game where the simulatee's action can depend on that of the simulator. We also show that, in general, deciding whether simulation introduces Pareto-improving Nash equilibria in a given game is NP-hard. As positive results, we establish that mixed-strategy simulation can improve social welfare if the simulator has the option to scale their level of trust, if the players face challenges with both trust and coordination, or if maintaining some level of privacy is essential for enabling cooperation.
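The simulation decision described above can be illustrated with a minimal sketch (hypothetical payoff values and function names, not taken from the paper; the paper's generalised trust games are richer): the simulator pays a fixed cost to learn the simulatee's mixing probability and then best-responds to it.

```python
import numpy as np

# Hypothetical payoffs for the simulator (row player) in a trust game:
# rows = simulator's action (Trust, Walk); cols = simulatee (Cooperate, Defect).
U = np.array([[3.0, -1.0],
              [0.0,  0.0]])

def best_response_value(p_coop: float) -> float:
    """Expected payoff of the simulator's best response once it has
    learned that the simulatee cooperates with probability p_coop."""
    expected = U @ np.array([p_coop, 1.0 - p_coop])
    return float(expected.max())

def simulation_worthwhile(p_coop: float, cost: float, default_action: int) -> bool:
    """Pay the fixed simulation cost iff best-responding to the learned
    mixed strategy beats committing to default_action without simulating."""
    baseline = float(U[default_action] @ np.array([p_coop, 1.0 - p_coop]))
    return best_response_value(p_coop) - cost > baseline

# Against a frequent defector (p=0.2), paying cost 0.1 to learn the mix
# (and then Walking, for payoff 0) beats blindly Trusting:
print(simulation_worthwhile(0.2, 0.1, default_action=0))  # True
# Against a reliable cooperator (p=0.9), simulation buys nothing:
print(simulation_worthwhile(0.9, 0.1, default_action=0))  # False
```

The sketch captures only the simulator's one-shot decision; the paper's negative results concern how the simulatee's equilibrium strategy shifts once it knows it may be simulated.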
Related papers
- YuLan-OneSim: Towards the Next Generation of Social Simulator with Large Language Models [50.86336063222539]
We introduce a novel social simulator called YuLan-OneSim.
Users can simply describe and refine their simulation scenarios through natural language interactions with our simulator.
We implement 50 default simulation scenarios spanning 8 domains, including economics, sociology, politics, psychology, organization, demographics, law, and communication.
arXiv Detail & Related papers (2025-05-12T14:05:17Z) - A Simulation System Towards Solving Societal-Scale Manipulation [14.799498804818333]
The rise of AI-driven manipulation poses significant risks to societal trust and democratic processes.
Yet, studying these effects in real-world settings at scale is ethically and logistically impractical.
We present a simulation environment designed to address this.
arXiv Detail & Related papers (2024-10-17T03:16:24Z) - GenSim: A General Social Simulation Platform with Large Language Model based Agents [111.00666003559324]
We propose a novel large language model (LLM)-based simulation platform called GenSim.
Our platform supports one hundred thousand agents to better simulate large-scale populations in real-world contexts.
To our knowledge, GenSim represents an initial step toward a general, large-scale, and correctable social simulation platform.
arXiv Detail & Related papers (2024-10-06T05:02:23Z) - Recursive Joint Simulation in Games [31.83449293345303]
Game-theoretic dynamics between AI agents could differ from traditional human-human interactions.
One such difference is that it may be possible to accurately simulate an AI agent, for example because its source code is known.
We show that the resulting interaction is strategically equivalent to an infinitely repeated version of the original game.
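The equivalence to an infinitely repeated game can be made concrete with a standard discounted-repeated-game calculation (textbook material, not from the paper; payoff values are hypothetical): under grim trigger, cooperation in a prisoner's dilemma is sustainable exactly when the discount factor is high enough.

```python
# Grim-trigger sustainability check in a repeated prisoner's dilemma.
# Cooperating forever yields R per round; deviating yields T once, then
# punishment payoff P forever. Cooperation is an equilibrium outcome iff
#   R / (1 - d) >= T + d * P / (1 - d),  i.e.  d >= (T - R) / (T - P).
def grim_trigger_sustainable(R: float, T: float, P: float, d: float) -> bool:
    return R / (1 - d) >= T + d * P / (1 - d)

# With R=3, T=5, P=1 the threshold discount factor is (5-3)/(5-1) = 0.5.
print(grim_trigger_sustainable(3, 5, 1, 0.6))  # True
print(grim_trigger_sustainable(3, 5, 1, 0.4))  # False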
arXiv Detail & Related papers (2024-02-12T23:53:46Z) - Neural Population Learning beyond Symmetric Zero-sum Games [52.20454809055356]
We introduce NeuPL-JPSRO, a neural population learning algorithm that benefits from transfer learning of skills and converges to a Coarse Correlated Equilibrium (CCE) of the game.
Our work shows that equilibrium convergent population learning can be implemented at scale and in generality.
arXiv Detail & Related papers (2024-01-10T12:56:24Z) - Sim-Anchored Learning for On-the-Fly Adaptation [45.123633153460034]
Fine-tuning simulation-trained RL agents with real-world data often degrades crucial behaviors due to limited or skewed data distributions.
We propose framing live-adaptation as a multi-objective optimization problem, where policy objectives must be satisfied both in simulation and reality.
arXiv Detail & Related papers (2023-01-17T16:16:53Z) - Finding mixed-strategy equilibria of continuous-action games without gradients using randomized policy networks [83.28949556413717]
We study the problem of computing an approximate Nash equilibrium of a continuous-action game without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
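The core parameterisation idea can be sketched as follows (a minimal illustration of a randomized policy network in general, not the paper's architecture; all weights here are fixed rather than trained): a network maps latent noise to a continuous action, so the pushforward of the noise distribution is the player's mixed strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer randomized policy network. In a gradient-free
# equilibrium solver these weights would be optimised by zeroth-order
# methods; here they are random constants, purely for illustration.
W1 = rng.normal(size=(8, 1)); b1 = np.zeros(8)
W2 = rng.normal(size=(1, 8)); b2 = np.zeros(1)

def sample_action(n: int) -> np.ndarray:
    """Draw n actions from the mixed strategy induced by the network."""
    z = rng.normal(size=(n, 1))            # latent noise
    h = np.tanh(z @ W1.T + b1)             # hidden layer
    return np.tanh(h @ W2.T + b2).ravel()  # actions in (-1, 1)

actions = sample_action(1000)
# Different noise draws give different actions: a genuinely mixed strategy
# over a continuous action space, with no explicit density required.
```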
arXiv Detail & Related papers (2022-11-29T05:16:41Z) - DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to
Reality [64.51295032956118]
We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation in diverse kinds of hardware and simulator setups.
arXiv Detail & Related papers (2022-10-25T01:51:36Z) - Provably Efficient Fictitious Play Policy Optimization for Zero-Sum
Markov Games with Structured Transitions [145.54544979467872]
We propose and analyze new fictitious play policy optimization algorithms for zero-sum Markov games with structured but unknown transitions.
We prove tight $\widetilde{\mathcal{O}}(\sqrt{K})$ regret bounds after $K$ episodes in a two-agent competitive game scenario.
Our algorithms feature a combination of Upper Confidence Bound (UCB)-type optimism and fictitious play under the scope of simultaneous policy optimization.
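Plain fictitious play (the classical building block, without the paper's UCB-type optimism or Markov-game structure) can be sketched on a zero-sum matrix game: each player best-responds to the empirical mixture of the opponent's past play.

```python
import numpy as np

# Matching pennies: the row player wants to match, the column player to
# mismatch; the unique equilibrium mixes 50/50 on both sides.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])  # row player's payoff matrix

def fictitious_play(A: np.ndarray, iters: int = 20000):
    counts_row = np.ones(A.shape[0])  # empirical counts of row's actions
    counts_col = np.ones(A.shape[1])  # empirical counts of col's actions
    for _ in range(iters):
        # Simultaneous best responses to the opponent's empirical mixture.
        row = int(np.argmax(A @ (counts_col / counts_col.sum())))
        col = int(np.argmin((counts_row / counts_row.sum()) @ A))
        counts_row[row] += 1
        counts_col[col] += 1
    return counts_row / counts_row.sum(), counts_col / counts_col.sum()

x, y = fictitious_play(A)
# In zero-sum games the empirical frequencies converge to equilibrium
# (Robinson, 1951); here both approach the (0.5, 0.5) mix.
```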
arXiv Detail & Related papers (2022-07-25T18:29:16Z) - Deep Learning-based Spatially Explicit Emulation of an Agent-Based
Simulator for Pandemic in a City [0.6875312133832077]
Agent-Based Models are useful for simulation of physical or social processes, such as the spreading of a pandemic in a city.
Such models are computationally very expensive, and the complexity is often linear in the total number of agents.
In this paper, we discuss a Deep Learning model based on Dilated Convolutional Neural Network that can emulate such an agent based model with high accuracy.
arXiv Detail & Related papers (2022-05-28T10:56:37Z) - On the Verge of Solving Rocket League using Deep Reinforcement Learning
and Sim-to-sim Transfer [42.87143421242222]
This work explores a third way that is established in robotics, namely sim-to-real transfer.
In the case of Rocket League, we demonstrate that single behaviors of goalies and strikers can be successfully learned using Deep Reinforcement Learning.
arXiv Detail & Related papers (2022-05-10T17:37:19Z) - TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors [74.67698916175614]
We propose TrafficSim, a multi-agent behavior model for realistic traffic simulation.
In particular, we leverage an implicit latent variable model to parameterize a joint actor policy.
We show TrafficSim generates significantly more realistic and diverse traffic scenarios as compared to a diverse set of baselines.
arXiv Detail & Related papers (2021-01-17T00:29:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.