Emergent cooperation through mutual information maximization
- URL: http://arxiv.org/abs/2006.11769v1
- Date: Sun, 21 Jun 2020 11:15:55 GMT
- Title: Emergent cooperation through mutual information maximization
- Authors: Santiago Cuervo and Marco Alzate
- Abstract summary: We propose a decentralized deep reinforcement learning algorithm for the design of cooperative multi-agent systems.
The algorithm is based on the hypothesis that highly correlated actions are a feature of a cooperative system.
We conclude that the maximization of mutual information among agents promotes the emergence of cooperation in social dilemmas.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With artificial intelligence systems becoming ubiquitous in our
society, their designers will soon have to consider their social dimension, as
many of these systems will have to interact with one another to work efficiently. With this
in mind, we propose a decentralized deep reinforcement learning algorithm for
the design of cooperative multi-agent systems. The algorithm is based on the
hypothesis that highly correlated actions are a feature of cooperative systems,
and hence, we propose the insertion of an auxiliary objective of maximization
of the mutual information between the actions of agents in the learning
problem. Our system is applied to a social dilemma, a problem whose optimal
solution requires that agents cooperate to maximize a macroscopic performance
function despite the divergent individual objectives of each agent. By
comparing the performance of the proposed system to a system without the
auxiliary objective, we conclude that the maximization of mutual information
among agents promotes the emergence of cooperation in social dilemmas.
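The auxiliary objective described above can be illustrated with a minimal sketch: estimate the mutual information between two agents' discrete actions from sampled joint actions (a plug-in histogram estimator), then add it as a weighted bonus to the task reward. The function names and the `beta` weighting coefficient are hypothetical illustrations; the paper's exact estimator and objective may differ.

```python
import numpy as np

def empirical_mutual_information(joint_actions):
    """Plug-in estimate of I(A1; A2) in nats from sampled joint actions.

    joint_actions: integer array of shape (N, 2), one row per timestep,
    holding the discrete action indices chosen by the two agents.
    """
    a1, a2 = joint_actions[:, 0], joint_actions[:, 1]
    n1, n2 = a1.max() + 1, a2.max() + 1
    joint = np.zeros((n1, n2))
    for x, y in joint_actions:
        joint[x, y] += 1
    joint /= joint.sum()                       # empirical joint distribution
    p1 = joint.sum(axis=1, keepdims=True)      # marginal of agent 1
    p2 = joint.sum(axis=0, keepdims=True)      # marginal of agent 2
    mask = joint > 0                           # avoid log(0) terms
    return float((joint[mask] * np.log(joint[mask] / (p1 @ p2)[mask])).sum())

def shaped_reward(task_reward, mi_estimate, beta=0.1):
    """Task reward plus a weighted MI bonus; beta is a hypothetical
    hyperparameter trading off task performance against action correlation."""
    return task_reward + beta * mi_estimate
```

With perfectly correlated binary actions the estimator returns log 2 nats; with independent uniform actions it returns 0, so the bonus rewards coordinated behavior, in line with the hypothesis that highly correlated actions are a feature of cooperative systems.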
Related papers
- Improving Cooperation in Collaborative Embodied AI [31.991962631895657]
The integration of Large Language Models into multiagent systems has opened new possibilities for collaborative reasoning and cooperation with AI agents. This paper explores different prompting methods and evaluates their effectiveness in enhancing agent collaborative behaviour and decision-making. We extend our research by integrating speech capabilities, enabling seamless collaborative voice-based interactions.
arXiv Detail & Related papers (2025-10-03T16:25:48Z) - Collaborative Gym: A Framework for Enabling and Evaluating Human-Agent Collaboration [51.452664740963066]
Collaborative Gym is a framework enabling asynchronous, tripartite interaction among agents, humans, and task environments.
We instantiate Co-Gym with three representative tasks in both simulated and real-world conditions.
Our findings reveal that collaborative agents consistently outperform their fully autonomous counterparts in task performance.
arXiv Detail & Related papers (2024-12-20T09:21:15Z) - Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z) - Agent-Oriented Planning in Multi-Agent Systems [54.429028104022066]
We propose a novel framework for agent-oriented planning in multi-agent systems, leveraging a fast task decomposition and allocation process.
We integrate a feedback loop into the proposed framework to further enhance the effectiveness and robustness of such a problem-solving process.
arXiv Detail & Related papers (2024-10-03T04:07:51Z) - Interactive Speculative Planning: Enhance Agent Efficiency through Co-design of System and User Interface [38.76937539085164]
This paper presents a human-centered efficient agent planning method -- Interactive Speculative Planning.
We aim at enhancing the efficiency of agent planning through both system design and human-AI interaction.
arXiv Detail & Related papers (2024-09-30T16:52:51Z) - Scaling Large-Language-Model-based Multi-Agent Collaboration [75.5241464256688]
Pioneering advancements in large language model-powered agents have underscored the design pattern of multi-agent collaboration.
Inspired by the neural scaling law, this study investigates whether a similar principle applies to increasing agents in multi-agent collaboration.
arXiv Detail & Related papers (2024-06-11T11:02:04Z) - Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
arXiv Detail & Related papers (2024-05-14T12:40:25Z) - Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z) - Learning Reward Machines in Cooperative Multi-Agent Tasks [75.79805204646428]
This paper presents a novel approach to Multi-Agent Reinforcement Learning (MARL).
It combines cooperative task decomposition with the learning of reward machines (RMs) encoding the structure of the sub-tasks.
The proposed method helps deal with the non-Markovian nature of the rewards in partially observable environments.
arXiv Detail & Related papers (2023-03-24T15:12:28Z) - Iterated Reasoning with Mutual Information in Cooperative and Byzantine
Decentralized Teaming [0.0]
We show that reformulating an agent's policy to be conditional on the policies of its teammates inherently maximizes a Mutual Information (MI) lower bound when optimizing under Policy Gradient (PG).
Our approach, InfoPG, outperforms baselines in learning emergent collaborative behaviors and sets the state-of-the-art in decentralized cooperative MARL tasks.
arXiv Detail & Related papers (2022-01-20T22:54:32Z) - A Novel Multi-Agent System for Complex Scheduling Problems [2.294014185517203]
This paper presents the conception and implementation of a multi-agent system applicable to various problem domains.
We simulate an NP-hard scheduling problem to demonstrate the validity of our approach.
This paper highlights the advantages of the agent-based approach, such as reduced layout complexity, improved control of complicated systems, and extensibility.
arXiv Detail & Related papers (2020-04-20T14:04:58Z) - Counterfactual Multi-Agent Reinforcement Learning with Graph Convolution
Communication [5.5438676149999075]
We consider a fully cooperative multi-agent system where agents cooperate to maximize a system's utility.
We propose that multi-agent systems must have the ability to communicate and understand the interplay between agents.
We develop an architecture that allows for communication among agents and tailors the system's reward for each individual agent.
arXiv Detail & Related papers (2020-04-01T14:36:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.