Finding Core Members of Cooperative Games using Agent-Based Modeling
- URL: http://arxiv.org/abs/2009.00519v1
- Date: Sun, 30 Aug 2020 17:38:43 GMT
- Title: Finding Core Members of Cooperative Games using Agent-Based Modeling
- Authors: Daniele Vernon-Bido, Andrew J. Collins
- Abstract summary: Agent-based modeling (ABM) is a powerful paradigm to gain insight into social phenomena.
In this paper, an algorithm is developed that can be embedded into an ABM to allow the agents to find coalitions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Agent-based modeling (ABM) is a powerful paradigm to gain insight into social
phenomena. One area that ABM has rarely been applied is coalition formation.
Traditionally, coalition formation is modeled using cooperative game theory. In
this paper, a heuristic algorithm is developed that can be embedded into an ABM
to allow the agents to find coalitions. The resultant coalition structures are
comparable to those found by cooperative game theory solution approaches,
specifically, the core. A heuristic approach is required due to the
computational complexity of finding a cooperative game theory solution which
limits its application to only about a score of agents. The ABM paradigm
provides a platform in which simple rules and interactions between agents can
produce a macro-level effect without the large computational requirements. As
such, it can be an effective means for approximating cooperative game solutions
for large numbers of agents. Our heuristic algorithm combines agent-based
modeling and cooperative game theory to help find agent partitions that are
members of a game's core solution. The accuracy of our heuristic algorithm can
be determined by comparing its outcomes to the actual core solutions. This
comparison is achieved by developing an experiment that uses a specific example of
a cooperative game called the glove game. The glove game is a type of exchange
economy game. Finding the traditional cooperative game theory solutions is
computationally intensive for large numbers of players because each possible
partition must be compared to each possible coalition to determine the core
set; hence our experiment only considers games of up to nine players. The
results indicate that our heuristic approach achieves a core solution over 90%
of the time for the games considered in our experiment.
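To make the exhaustive benchmark described in the abstract concrete, the sketch below illustrates the kind of brute-force core check that the heuristic is compared against. This is not the authors' implementation; the helper names (glove_value, all_coalitions, in_core) and the tolerance value are illustrative assumptions. It encodes the glove game's characteristic function (a coalition's value is the number of left/right glove pairs it can assemble) and tests a candidate payoff allocation against every possible coalition.

```python
# Minimal sketch (illustrative, not the paper's code) of a brute-force core
# check for the glove game: an allocation is in the core if it distributes
# exactly v(N) and no coalition S can improve on it, i.e. sum_{i in S} x_i >= v(S).
from itertools import chain, combinations

def glove_value(coalition, gloves):
    """Characteristic function v(S): number of matched left/right pairs in S."""
    lefts = sum(1 for i in coalition if gloves[i] == "L")
    rights = len(coalition) - lefts
    return min(lefts, rights)

def all_coalitions(players):
    """Every non-empty subset of the player set."""
    return chain.from_iterable(
        combinations(players, r) for r in range(1, len(players) + 1))

def in_core(allocation, gloves):
    """Check efficiency and the no-blocking-coalition condition."""
    players = range(len(gloves))
    if abs(sum(allocation) - glove_value(players, gloves)) > 1e-9:
        return False  # allocation does not distribute exactly v(N)
    return all(sum(allocation[i] for i in S) >= glove_value(S, gloves) - 1e-9
               for S in all_coalitions(players))

# Example: two left-glove holders and one right-glove holder.  The right
# glove is scarce, so the core gives its owner the whole surplus.
gloves = ["L", "L", "R"]
print(in_core([0.0, 0.0, 1.0], gloves))   # True
print(in_core([0.5, 0.0, 0.5], gloves))   # False: the pair {player 1, player 2} can block
```

Because there are 2^n - 1 non-empty coalitions to check, and the number of agent partitions grows with the Bell numbers, this exhaustive comparison quickly becomes impractical, which is consistent with the experiment being limited to games of up to nine players.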
Related papers
- Runtime analysis of a coevolutionary algorithm on impartial combinatorial games [1.104960878651584]
Coevolutionary algorithms (CoEAs) evolve a population of individuals by iteratively selecting the strongest based on their interactions against contemporaries, and using those selected as parents for the following generation.
However, the successful application of CoEAs for game playing is difficult due to pathological behaviours such as cycling, an issue especially critical for games with intransitive payoff landscapes.
In this paper, we push the scope of analysis to discover an optimal strategy for an impartial game.
This result applies to any impartial game, and for many games the implied bound is polynomial or quasipolynomial as a function of the number of game positions.
arXiv Detail & Related papers (2024-09-06T10:39:17Z) - Imperfect-Recall Games: Equilibrium Concepts and Their Complexity [74.01381499760288]
We investigate optimal decision making under imperfect recall, that is, when an agent forgets information it once held before.
In the framework of extensive-form games with imperfect recall, we analyze the computational complexities of finding equilibria in multiplayer settings.
arXiv Detail & Related papers (2024-06-23T00:27:28Z) - Open Ad Hoc Teamwork with Cooperative Game Theory [28.605478081031215]
Ad hoc teamwork poses a challenging problem, requiring the design of an agent to collaborate with teammates without prior coordination or joint training.
One promising solution is leveraging the generalizability of graph neural networks to handle an unrestricted number of agents.
We propose a novel algorithm named CIAO, based on the game's framework, with additional provable implementation tricks that can facilitate learning.
arXiv Detail & Related papers (2024-02-23T11:04:33Z) - Neural Population Learning beyond Symmetric Zero-sum Games [52.20454809055356]
We introduce NeuPL-JPSRO, a neural population learning algorithm that benefits from transfer learning of skills and converges to a Coarse Correlated Equilibrium (CCE) of the game.
Our work shows that equilibrium convergent population learning can be implemented at scale and in generality.
arXiv Detail & Related papers (2024-01-10T12:56:24Z) - Tackling Cooperative Incompatibility for Zero-Shot Human-AI Coordination [36.33334853998621]
We introduce the Cooperative Open-ended LEarning (COLE) framework to solve cooperative incompatibility in learning.
COLE formulates open-ended objectives in cooperative games with two players using perspectives of graph theory to evaluate and pinpoint the cooperative capacity of each strategy.
We show that COLE could effectively overcome the cooperative incompatibility from theoretical and empirical analysis.
arXiv Detail & Related papers (2023-06-05T16:51:38Z) - Hardness of Independent Learning and Sparse Equilibrium Computation in
Markov Games [70.19141208203227]
We consider the problem of decentralized multi-agent reinforcement learning in Markov games.
We show that no algorithm attains no-regret in general-sum games when executed independently by all players.
We show that our lower bounds hold even for the seemingly easier setting in which all agents are controlled by a centralized algorithm.
arXiv Detail & Related papers (2023-03-22T03:28:12Z) - Cooperative Open-ended Learning Framework for Zero-shot Coordination [35.330951448600594]
We propose a framework to construct open-ended objectives in cooperative games with two players.
We also propose a practical algorithm that leverages knowledge from game theory and graph theory.
Our method outperforms current state-of-the-art methods when coordinating with different-level partners.
arXiv Detail & Related papers (2023-02-09T18:37:04Z) - Multi-Agent Collaboration via Reward Attribution Decomposition [75.36911959491228]
We propose Collaborative Q-learning (CollaQ) that achieves state-of-the-art performance in the StarCraft multi-agent challenge.
CollaQ is evaluated on various StarCraft maps and shows that it outperforms existing state-of-the-art techniques.
arXiv Detail & Related papers (2020-10-16T17:42:11Z) - Faster Algorithms for Optimal Ex-Ante Coordinated Collusive Strategies
in Extensive-Form Zero-Sum Games [123.76716667704625]
We focus on the problem of finding an optimal strategy for a team of two players that faces an opponent in an imperfect-information zero-sum extensive-form game.
In that setting, it is known that the best the team can do is sample a profile of potentially randomized strategies (one per player) from a joint (a.k.a. correlated) probability distribution at the beginning of the game.
We provide an algorithm that computes such an optimal distribution by only using profiles where only one of the team members gets to randomize in each profile.
arXiv Detail & Related papers (2020-09-21T17:51:57Z) - Evaluating and Rewarding Teamwork Using Cooperative Game Abstractions [103.3630903577951]
We use cooperative game theory to study teams of artificial RL agents as well as real world teams from professional sports.
We introduce a parametric model called cooperative game abstractions (CGAs) for estimating CFs from data.
We provide identification results and sample complexity bounds for CGA models as well as error bounds in the estimation of the Shapley Value using CGAs.
arXiv Detail & Related papers (2020-06-16T22:03:36Z)