A Marriage between Adversarial Team Games and 2-player Games: Enabling
Abstractions, No-regret Learning, and Subgame Solving
- URL: http://arxiv.org/abs/2206.09161v1
- Date: Sat, 18 Jun 2022 10:02:08 GMT
- Authors: Luca Carminati, Federico Cacciamani, Marco Ciccone, Nicola Gatti
- Abstract summary: \emph{Ex ante} correlation is becoming the mainstream approach for \emph{sequential adversarial team games}, where a team of players faces another team in a zero-sum game.
This work shows that we can recover from this weakness by bridging the gap between sequential adversarial team games and 2-player games.
We propose a new, suitable game representation that we call \emph{team-public-information}, in which a team is represented as a single coordinator who only knows information common to the whole team and prescribes to each member an action for any possible private state.
- Score: 31.29335755664997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: \emph{Ex ante} correlation is becoming the mainstream approach for
\emph{sequential adversarial team games}, where a team of players faces another
team in a zero-sum game. It is known that team members' asymmetric information
makes both equilibrium computation \textsf{APX}-hard and team's strategies not
directly representable on the game tree. This latter issue prevents the
adoption of successful tools for huge 2-player zero-sum games such as,
\emph{e.g.}, abstractions, no-regret learning, and subgame solving. This work
shows that we can recover from this weakness by bridging the gap between
sequential adversarial team games and 2-player games. In particular, we propose
a new, suitable game representation that we call
\emph{team-public-information}, in which a team is represented as a single
coordinator who only knows information common to the whole team and prescribes
to each member an action for any possible private state. The resulting
representation is highly \emph{explainable}, being a 2-player tree in which the
team's strategies are behavioral with a direct interpretation and more
expressive than the original extensive form when designing abstractions.
Furthermore, we prove payoff equivalence of our representation, and we provide
techniques that, starting directly from the extensive form, generate
dramatically more compact representations without information loss. Finally, we
experimentally evaluate our techniques when applied to a standard testbed,
comparing their performance with the current state of the art.
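The coordinator construction described in the abstract can be illustrated with a minimal sketch (the state and action sets below are hypothetical, not from the paper): since the coordinator sees only public information, each of its actions is a prescription mapping every possible private state of a team member to a move, so the team's joint decision becomes a single-agent choice among prescriptions.

```python
from itertools import product

# Toy illustration of the team-public-information idea: the coordinator
# cannot observe a member's private state, so one coordinator action is
# a full prescription: a mapping private_state -> member_action.

private_states = ["s1", "s2"]   # hypothetical private states of a member
member_actions = ["a", "b"]     # hypothetical moves available to that member

def enumerate_prescriptions(states, actions):
    """Return every mapping from private states to actions."""
    return [dict(zip(states, combo))
            for combo in product(actions, repeat=len(states))]

prescriptions = enumerate_prescriptions(private_states, member_actions)
# There are |actions| ** |states| prescriptions: here 2 ** 2 = 4.
```

Note how the number of coordinator actions grows exponentially in the number of private states, which is consistent with the blow-up mentioned for the Public Team game below.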
Related papers
- State-Constrained Zero-Sum Differential Games with One-Sided Information [19.964883571758502]
We study zero-sum differential games with state constraints and one-sided information.
Our contribution is an extension to games with state constraints and the derivation of the primal and dual subdynamic principles necessary for computing behavioral strategies.
arXiv Detail & Related papers (2024-03-05T07:51:38Z)
- All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization [57.615269148301515]
In a competitive game scenario, a set of agents have to learn decisions that maximize their goals and minimize their adversaries' goals at the same time.
We propose a novel model composed of three neural layers that learn a representation of a competitive game, learn how to map the strategy of specific opponents, and how to disrupt them.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z)
- Individual and Collective Performance Deteriorate in a New Team: A Case Study of CS:GO Tournaments [11.86905804972623]
This study aims to answer how changing a team affects individual and collective performance in eSports tournaments.
We collected data from professional tournaments of a popular first-person shooter game.
After a player switched to a new team, both the individual and the collective performance dropped initially, and then slowly recovered.
arXiv Detail & Related papers (2022-05-19T16:54:49Z)
- Collusion Detection in Team-Based Multiplayer Games [57.153233321515984]
We propose a system that detects colluding behaviors in team-based multiplayer games.
The proposed method analyzes the players' social relationships paired with their in-game behavioral patterns.
We then automate the detection using Isolation Forest, an unsupervised learning technique specialized in highlighting outliers.
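The detection step above can be sketched with scikit-learn's off-the-shelf Isolation Forest (the per-pair behavioral features below are invented for illustration; the paper's actual features combine social relationships with in-game behavioral patterns):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical 2-D feature vectors, one per player pair
# (e.g. time played together, mutual assist rate).
rng = np.random.default_rng(0)
normal_pairs = rng.normal(loc=0.0, scale=1.0, size=(100, 2))
colluders = np.array([[6.0, 6.0], [5.5, 6.2]])   # injected outliers
X = np.vstack([normal_pairs, colluders])

# Isolation Forest isolates outliers with short random partition paths;
# predict() labels outliers as -1 and inliers as 1.
model = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = model.predict(X)
flagged = np.where(labels == -1)[0]   # indices of suspected colluders
```

Being unsupervised, the method needs no labeled collusion examples; the `contamination` parameter sets the expected fraction of outliers.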
arXiv Detail & Related papers (2022-03-10T02:37:39Z)
- Public Information Representation for Adversarial Team Games [31.29335755664997]
The core difficulties of adversarial team games reside in the asymmetric information available to the team members during the play.
Our algorithms convert a sequential team game with adversaries to a classical two-player zero-sum game.
Due to the NP-hard nature of the problem, the resulting Public Team game may be exponentially larger than the original one.
arXiv Detail & Related papers (2022-01-25T15:07:12Z)
- Contextual Games: Multi-Agent Learning with Side Information [57.76996806603094]
We formulate the novel class of contextual games driven by contextual information at each round.
By means of kernel-based regularity assumptions, we model the correlation between different contexts and game outcomes.
We propose a novel online (meta) algorithm that exploits such correlations to minimize the contextual regret of individual players.
arXiv Detail & Related papers (2021-07-13T18:37:37Z)
- Multi-Agent Coordination in Adversarial Environments through Signal Mediated Strategies [37.00818384785628]
Team members can coordinate their strategies before the beginning of the game, but are unable to communicate during the playing phase of the game.
In this setting, model-free RL methods are oftentimes unable to capture coordination because agents' policies are executed in a decentralized fashion.
We show convergence to coordinated equilibria in cases where previous state-of-the-art multi-agent RL algorithms did not.
arXiv Detail & Related papers (2021-02-09T18:44:16Z)
- Faster Algorithms for Optimal Ex-Ante Coordinated Collusive Strategies in Extensive-Form Zero-Sum Games [123.76716667704625]
We focus on the problem of finding an optimal strategy for a team of two players that faces an opponent in an imperfect-information zero-sum extensive-form game.
In that setting, it is known that the best the team can do is sample a profile of potentially randomized strategies (one per player) from a joint (a.k.a. correlated) probability distribution at the beginning of the game.
We provide an algorithm that computes such an optimal distribution by only using profiles where only one of the team members gets to randomize in each profile.
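The ex ante sampling step described above can be sketched as follows (the profiles and probabilities are invented for illustration): before play begins, the team draws one joint profile from a correlated distribution, and each member then executes her component without further communication.

```python
import random

# Hypothetical joint (correlated) distribution over strategy profiles,
# one component per team member.
profiles = [("plan_A1", "plan_A2"), ("plan_B1", "plan_B2")]
probs = [0.7, 0.3]

def sample_joint_profile(rng):
    """Draw one correlated profile for both members at the start of play."""
    return rng.choices(profiles, weights=probs, k=1)[0]

rng = random.Random(42)
p1_plan, p2_plan = sample_joint_profile(rng)
# The two plans are always drawn together, which is what makes the
# distribution correlated rather than a product of independent mixes.
```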
arXiv Detail & Related papers (2020-09-21T17:51:57Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include algorithm's regret guarantees that depend on the regularity of the opponent's response.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.