Promoting Resilience in Multi-Agent Reinforcement Learning via
Confusion-Based Communication
- URL: http://arxiv.org/abs/2111.06614v1
- Date: Fri, 12 Nov 2021 09:03:19 GMT
- Title: Promoting Resilience in Multi-Agent Reinforcement Learning via
Confusion-Based Communication
- Authors: Ofir Abu, Matthias Gerstgrasser, Jeffrey Rosenschein and Sarah Keren
- Abstract summary: We highlight the relationship between a group's ability to collaborate effectively and the group's resilience.
To promote resilience, we suggest facilitating collaboration via a novel confusion-based communication protocol.
We present an empirical evaluation of our approach in a variety of MARL settings.
- Score: 5.367993194110255
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in multi-agent reinforcement learning (MARL) provide a
variety of tools that support the ability of agents to adapt to unexpected
changes in their environment, and to operate successfully given their
environment's dynamic nature (which may be intensified by the presence of other
agents). In this work, we highlight the relationship between a group's ability
to collaborate effectively and the group's resilience, which we measure as the
group's ability to adapt to perturbations in the environment. To promote
resilience, we suggest facilitating collaboration via a novel confusion-based
communication protocol according to which agents broadcast observations that
are misaligned with their previous experiences. We allow decisions regarding
the width and frequency of messages to be learned autonomously by agents, which
are incentivized to reduce confusion. We present an empirical evaluation of our
approach in a variety of MARL settings.
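The abstract describes the protocol only at a high level. The following is a minimal, self-contained Python sketch of how a confusion-based broadcast rule might look; all names (ConfusionCommAgent, confusion_threshold, the running-statistics confusion score) are illustrative assumptions rather than the authors' implementation, and the learned decision about message frequency is stood in for by a fixed threshold.

```python
import numpy as np

class ConfusionCommAgent:
    """Illustrative sketch of a confusion-based communication rule.

    The agent keeps running statistics over its past observations and
    broadcasts the current observation only when it is sufficiently
    misaligned with that experience (i.e., surprising). The threshold
    controlling message frequency is a plain parameter here; the paper
    describes it as being learned autonomously by the agents.
    """

    def __init__(self, obs_dim: int, confusion_threshold: float = 2.0):
        self.threshold = confusion_threshold
        self.count = 0
        self.mean = np.zeros(obs_dim)
        self.m2 = np.ones(obs_dim)  # running squared deviations (unit prior)

    def confusion(self, obs: np.ndarray) -> float:
        # Normalized distance of the observation from past experience.
        var = self.m2 / max(self.count, 1)
        return float(np.mean(np.abs(obs - self.mean) / np.sqrt(var + 1e-8)))

    def update(self, obs: np.ndarray) -> None:
        # Welford's online update of the running mean and deviations.
        self.count += 1
        delta = obs - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (obs - self.mean)

    def maybe_broadcast(self, obs: np.ndarray):
        """Return the observation if it is confusing enough to share, else None."""
        score = self.confusion(obs)
        self.update(obs)
        return obs if score > self.threshold else None


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    agent = ConfusionCommAgent(obs_dim=4)
    for t in range(200):
        obs = rng.normal(size=4)
        if t == 150:
            obs += 10.0  # an unexpected perturbation of the environment
        if agent.maybe_broadcast(obs) is not None:
            print(f"t={t}: broadcasting surprising observation")
```

In a full MARL implementation, the broadcast decision would typically be folded into each agent's learned policy and shaped by a reward term that penalizes confusion, rather than using the fixed threshold assumed above.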
Related papers
- From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process.
We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
arXiv Detail & Related papers (2024-11-06T10:35:11Z) - AntEval: Evaluation of Social Interaction Competencies in LLM-Driven
Agents [65.16893197330589]
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z) - Situation-Dependent Causal Influence-Based Cooperative Multi-agent
Reinforcement Learning [18.054709749075194]
We propose a novel MARL algorithm named Situation-Dependent Causal Influence-Based Cooperative Multi-agent Reinforcement Learning (SCIC).
Our approach aims to detect inter-agent causal influences in specific situations, based on a criterion that uses causal intervention and conditional mutual information.
The resulting update links coordinated exploration and intrinsic reward distribution, which enhance overall collaboration and performance.
arXiv Detail & Related papers (2023-12-15T05:09:32Z) - Quantifying Agent Interaction in Multi-agent Reinforcement Learning for
Cost-efficient Generalization [63.554226552130054]
Generalization poses a significant challenge in Multi-agent Reinforcement Learning (MARL).
The extent to which an agent is influenced by unseen co-players depends on the agent's policy and the specific scenario.
We present the Level of Influence (LoI), a metric quantifying the interaction intensity among agents within a given scenario and environment.
arXiv Detail & Related papers (2023-10-11T06:09:26Z) - ProAgent: Building Proactive Cooperative Agents with Large Language
Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - AgentVerse: Facilitating Multi-Agent Collaboration and Exploring
Emergent Behaviors [93.38830440346783]
We propose AgentVerse, a multi-agent framework that can collaboratively adjust its composition as a greater-than-the-sum-of-its-parts system.
Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent.
In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups.
arXiv Detail & Related papers (2023-08-21T16:47:11Z) - Centralized Training with Hybrid Execution in Multi-Agent Reinforcement
Learning [7.163485179361718]
We introduce hybrid execution in multi-agent reinforcement learning (MARL), a new paradigm in which agents aim to successfully complete cooperative tasks with arbitrary communication levels at execution time.
We contribute MARO, an approach that makes use of an auto-regressive predictive model, trained in a centralized manner, to estimate missing agents' observations (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-10-12T14:58:32Z) - Coordinating Policies Among Multiple Agents via an Intelligent
Communication Channel [81.39444892747512]
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance.
arXiv Detail & Related papers (2022-05-21T14:11:33Z) - Reliably Re-Acting to Partner's Actions with the Social Intrinsic
Motivation of Transfer Empowerment [40.24079015603578]
We consider multi-agent reinforcement learning (MARL) for cooperative communication and coordination tasks.
MARL agents can be brittle because they can overfit their training partners' policies.
Our objective is to bias the learning process towards finding strategies that react well to other agents' behaviors.
arXiv Detail & Related papers (2022-03-07T13:03:35Z) - Depthwise Convolution for Multi-Agent Communication with Enhanced
Mean-Field Approximation [9.854975702211165]
We propose a new method based on local communication learning to tackle the multi-agent RL (MARL) challenge.
First, we design a new communication protocol that exploits the ability of depthwise convolution to efficiently extract local relations.
Second, we introduce the mean-field approximation into our method to reduce the scale of agent interactions.
arXiv Detail & Related papers (2022-03-06T07:42:43Z) - Provably Efficient Cooperative Multi-Agent Reinforcement Learning with
Function Approximation [15.411902255359074]
We show that it is possible to achieve near-optimal no-regret learning even with a fixed constant communication budget.
Our work generalizes several ideas from the multi-agent contextual and multi-armed bandit literature to MDPs and reinforcement learning.
arXiv Detail & Related papers (2021-03-08T18:51:00Z)
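To make the hybrid-execution entry above (MARO) more concrete, here is a deliberately simplified sketch of the underlying idea: a predictive model is fit offline on centrally collected trajectories and then used at execution time to fill in a teammate's observation when its message does not arrive. The linear least-squares predictor and all names below are assumptions for illustration; the actual MARO approach uses an auto-regressive neural model.

```python
import numpy as np

# Minimal sketch (not the authors' MARO code): a linear auto-regressive
# predictor, fit offline on centrally collected trajectories, that fills in
# a teammate's observation at execution time when its message is missing.

def fit_predictor(trajectories):
    """Least-squares fit of next_obs ~ W @ [prev_obs, 1]."""
    prev = np.concatenate([t[:-1] for t in trajectories])
    nxt = np.concatenate([t[1:] for t in trajectories])
    X = np.hstack([prev, np.ones((len(prev), 1))])
    W, *_ = np.linalg.lstsq(X, nxt, rcond=None)
    return W

def predict(W, prev_obs):
    """Estimate the teammate's current observation from its last known one."""
    return np.hstack([prev_obs, 1.0]) @ W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic teammate trajectories with simple linear dynamics.
    A = np.array([[0.9, 0.1], [0.0, 0.95]])
    trajectories = []
    for _ in range(20):
        traj = [rng.normal(size=2)]
        for _ in range(50):
            traj.append(traj[-1] @ A.T + 0.01 * rng.normal(size=2))
        trajectories.append(np.array(traj))

    W = fit_predictor(trajectories)        # centralized training phase
    last_received = trajectories[0][10]    # last message from the teammate
    estimate = predict(W, last_received)   # used when no message arrives
    print("estimated missing observation:", estimate)
```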
This list is automatically generated from the titles and abstracts of the papers on this site.