Evolution of Coordination in Pairwise and Multi-player Interactions via
Prior Commitments
- URL: http://arxiv.org/abs/2009.11727v2
- Date: Thu, 17 Dec 2020 12:21:52 GMT
- Title: Evolution of Coordination in Pairwise and Multi-player Interactions via
Prior Commitments
- Authors: Ndidi Bianca Ogbo, Aiman Elragig, The Anh Han
- Abstract summary: We show that prior commitments can be a viable evolutionary mechanism for enhancing coordination.
In multiparty interactions, prior commitments prove to be crucial when a high level of group diversity is required.
Our analysis provides new insights into the complexity and beauty of behavioral evolution driven by humans' capacity for commitment.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Upon starting a collective endeavour, it is important to understand your
partners' preferences and how strongly they commit to a common goal.
Establishing a prior commitment or agreement in terms of posterior benefits and
consequences from those engaging in it provides an important mechanism for
securing cooperation. Resorting to methods from Evolutionary Game Theory (EGT),
here we analyse how prior commitments can also be adopted as a tool for
enhancing coordination when its outcomes exhibit an asymmetric payoff
structure, in both pairwise and multiparty interactions. Arguably, coordination
is more complex to achieve than cooperation since there might be several
desirable collective outcomes in a coordination problem (compared to mutual
cooperation, the only desirable collective outcome in cooperation dilemmas).
Our analysis, both analytically and via numerical simulations, shows that
whether prior commitment would be a viable evolutionary mechanism for enhancing
coordination and the overall population social welfare strongly depends on the
collective benefit and severity of competition, and more importantly, how
asymmetric benefits are resolved in a commitment deal. Moreover, in multiparty
interactions, prior commitments prove to be crucial when a high level of group
diversity is required for optimal coordination. The results are robust for
different selection intensities. Overall, our analysis provides new insights
into the complexity and beauty of behavioral evolution driven by humans'
capacity for commitment, as well as for the design of self-organised and
distributed multi-agent systems for ensuring coordination among autonomous
agents.
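To make the EGT approach concrete, below is a minimal Python sketch of the standard finite-population pairwise-comparison (Fermi) imitation dynamics commonly used in this literature, applied to a stylized asymmetric coordination game between a commitment-proposing strategy (P) and a non-proposing one (N). The population size, payoff entries, commitment cost eps and compensation delta are illustrative assumptions for this sketch, not the model or parameter values analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylized asymmetric coordination game with a prior-commitment option.
# P proposes a commitment (setup cost eps) whose deal transfers delta to
# compensate the disadvantaged party; N proposes nothing. All payoff values
# below are assumptions made for this sketch, not the paper's model.
eps, delta = 0.25, 1.0
payoff = np.array([
    # opponent:    P              N
    [4.0 - eps,    2.0 - eps + delta],  # focal player plays P
    [4.0 - delta,  0.0],                # focal player plays N
])

Z = 100      # population size
mu = 1e-3    # exploration (mutation) probability

def fitness(k):
    """Average payoffs (f_P, f_N) in a well-mixed population with k P-players."""
    f_p = ((k - 1) * payoff[0, 0] + (Z - k) * payoff[0, 1]) / (Z - 1)
    f_n = (k * payoff[1, 0] + (Z - k - 1) * payoff[1, 1]) / (Z - 1)
    return f_p, f_n

def mean_committer_frequency(beta, steps=200_000):
    """Pairwise-comparison (Fermi) imitation dynamics; mean frequency of P."""
    k, total = Z // 2, 0.0
    for _ in range(steps):
        if rng.random() < mu:
            # rare exploration: a random individual switches strategy
            k += 1 if rng.random() < (Z - k) / Z else -1
        elif 0 < k < Z:
            f_p, f_n = fitness(k)
            # a random learner imitates a random role model of the other
            # strategy with Fermi probability 1 / (1 + exp(-beta * df)),
            # where df is the role model's payoff minus the learner's
            if rng.random() < k / Z:   # learner plays P, role model may be N
                if rng.random() < (Z - k) / (Z - 1) / (1.0 + np.exp(beta * (f_p - f_n))):
                    k -= 1
            else:                      # learner plays N, role model may be P
                if rng.random() < k / (Z - 1) / (1.0 + np.exp(beta * (f_n - f_p))):
                    k += 1
        total += k / Z
    return total / steps

for beta in (0.1, 1.0, 10.0):  # different selection intensities
    print(f"beta={beta:5.1f}  mean frequency of committers ~ {mean_committer_frequency(beta):.2f}")
```

Sweeping the selection intensity beta mirrors the robustness check mentioned in the abstract; under these assumed payoffs the committing strategy remains frequent across intensities, but the outcome shifts with how the asymmetric benefits (here the transfer delta) are split in the deal.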
Related papers
- CaPo: Cooperative Plan Optimization for Efficient Embodied Multi-Agent Cooperation [98.11670473661587]
CaPo improves cooperation efficiency with two phases: 1) meta-plan generation, and 2) progress-adaptive meta-plan and execution.
Experimental results on the ThreeDWorld Multi-Agent Transport and Communicative Watch-And-Help tasks demonstrate that CaPo achieves a much higher task completion rate and efficiency compared with state-of-the-art methods.
arXiv Detail & Related papers (2024-11-07T13:08:04Z) - Learning to Balance Altruism and Self-interest Based on Empathy in Mixed-Motive Games [47.8980880888222]
Multi-agent scenarios often involve mixed motives, demanding altruistic agents capable of self-protection against potential exploitation.
We propose LASE (Learning to balance Altruism and Self-interest based on Empathy).
LASE allocates a portion of its rewards to co-players as gifts, with this allocation adapting dynamically based on the social relationship.
arXiv Detail & Related papers (2024-10-10T12:30:56Z) - Cognitive Insights and Stable Coalition Matching for Fostering Multi-Agent Cooperation [6.536780912510439]
We propose a novel matching coalition mechanism that leverages the strengths of agents with different ToM levels.
Our work demonstrates the potential of leveraging ToM to create more sophisticated and human-like coordination strategies.
arXiv Detail & Related papers (2024-05-28T10:59:33Z) - Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z) - CoMIX: A Multi-agent Reinforcement Learning Training Architecture for Efficient Decentralized Coordination and Independent Decision-Making [2.4555276449137042]
Robust coordination skills enable agents to operate cohesively in shared environments: working together towards a common goal and, ideally, individually without hindering each other's progress.
This paper presents Coordinated QMIX (CoMIX), a novel training framework for decentralized agents that enables emergent coordination through flexible policies while allowing independent decision-making at the individual level.
arXiv Detail & Related papers (2023-08-21T13:45:44Z) - Rethinking Trajectory Prediction via "Team Game" [118.59480535826094]
We present a novel formulation for multi-agent trajectory prediction, which explicitly introduces the concept of interactive group consensus.
On two multi-agent settings, i.e. team sports and pedestrians, the proposed framework consistently achieves superior performance compared to existing methods.
arXiv Detail & Related papers (2022-10-17T07:16:44Z) - Iterated Reasoning with Mutual Information in Cooperative and Byzantine
Decentralized Teaming [0.0]
We show that reformulating an agent's policy to be conditional on the policies of its teammates inherently maximizes a Mutual Information (MI) lower bound when optimizing under Policy Gradient (PG).
Our approach, InfoPG, outperforms baselines in learning emergent collaborative behaviors and sets the state-of-the-art in decentralized cooperative MARL tasks.
arXiv Detail & Related papers (2022-01-20T22:54:32Z) - Normative Disagreement as a Challenge for Cooperative AI [56.34005280792013]
We argue that typical cooperation-inducing learning algorithms fail to cooperate in bargaining problems.
We develop a class of norm-adaptive policies and show in experiments that these significantly increase cooperation.
arXiv Detail & Related papers (2021-11-27T11:37:42Z) - Improved cooperation by balancing exploration and exploitation in
intertemporal social dilemma tasks [2.541277269153809]
We propose a new learning strategy for achieving coordination by incorporating a learning rate that can balance exploration and exploitation.
We show that agents using this simple strategy achieve a relatively higher collective return in intertemporal social dilemma decision tasks.
We also explore the effects of the diversity of learning rates on the population of reinforcement learning agents and show that agents trained in heterogeneous populations develop particularly coordinated policies.
arXiv Detail & Related papers (2021-10-19T08:40:56Z) - Structured Diversification Emergence via Reinforced Organization Control
and Hierarchical Consensus Learning [48.525944995851965]
We propose a structured diversification emergence MARL framework named Rochico, based on reinforced organization control and hierarchical consensus learning.
Rochico is significantly better than current state-of-the-art algorithms in terms of exploration efficiency and cooperation strength.
arXiv Detail & Related papers (2021-02-09T11:46:12Z) - Efficient Querying for Cooperative Probabilistic Commitments [29.57444821831916]
Multiagent systems can use commitments as the core of a general coordination infrastructure.
We show how cooperative agents can efficiently find an (approximately) optimal commitment by querying about carefully-selected commitment choices.
arXiv Detail & Related papers (2020-12-14T00:47:09Z)