Mimicry and the Emergence of Cooperative Communication
- URL: http://arxiv.org/abs/2405.16622v1
- Date: Sun, 26 May 2024 16:34:52 GMT
- Title: Mimicry and the Emergence of Cooperative Communication
- Authors: Dylan Cope, Peter McBurney
- Abstract summary: Communication between agents is a critical component of cooperative multi-agent systems.
We explore the effects of allowing agents to mimic preexisting, externally generated, useful signals.
Our results show that both evolutionary optimisation and reinforcement learning may benefit from this intervention.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many situations, communication between agents is a critical component of cooperative multi-agent systems; however, it can be difficult to learn or evolve. In this paper, we investigate a simple way in which the emergence of communication may be facilitated. Namely, we explore the effects of allowing agents to mimic preexisting, externally generated, useful signals. The key idea is that these signals incentivise listeners to develop positive responses, which can then also be invoked by speakers mimicking those signals. We start by formalising this problem and demonstrating that this form of mimicry changes the optimisation dynamics and may provide an opportunity to escape non-communicative local optima. We then explore the problem empirically with a simulation in which spatially situated agents must communicate to collect resources. Our results show that both evolutionary optimisation and reinforcement learning may benefit from this intervention.
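The escape-from-local-optimum argument in the abstract can be illustrated with a toy signalling game. The sketch below is a simplified, assumed model, not the paper's actual formalisation or simulation: all names (`simulate`, `p`, `q`), parameter values, and the deterministic gradient-ascent dynamics are illustrative choices. A listener only benefits from reacting to signals if signals already occur, and a speaker only benefits from signalling if the listener already reacts, so both start at a non-communicative fixed point; an external cue correlated with resources breaks the deadlock.

```python
def simulate(external_cue_rate, steps=2000, lr=0.1):
    """Deterministic gradient ascent on the expected shared payoff of a
    toy signalling game (an illustrative sketch, not the paper's model).

    Returns (p, q):
      p = P(speaker emits the cue | resource present)
      q = P(listener approaches | cue heard)
    """
    r = 0.5    # probability a resource is present
    base = 0.05  # listener's baseline approach rate when no cue is heard
    e = external_cue_rate  # P(environment emits the cue | resource present)
    p = 0.0    # speaker starts silent
    q = base   # listener starts indifferent to the cue
    for _ in range(steps):
        # P(a cue is heard | resource present): environment or speaker emits it
        sig = 1 - (1 - e) * (1 - p)
        # Partial derivatives of the expected shared payoff:
        grad_q = r * sig                  # reacting pays off only if cues occur
        grad_p = r * (1 - e) * (q - base)  # signalling pays off only if the
                                           # listener reacts above baseline
        q = min(1.0, max(0.0, q + lr * grad_q))
        p = min(1.0, max(0.0, p + lr * grad_p))
    return p, q

if __name__ == "__main__":
    # Without the external cue, both gradients are zero at the start:
    # the pair is stuck at the non-communicative local optimum.
    print("no external cue:", simulate(0.0))
    # With an external cue, the listener learns to respond first, which
    # then makes mimicry profitable for the speaker.
    print("with mimicry:   ", simulate(0.5))
```

With `external_cue_rate=0.0` the dynamics never leave `(0.0, 0.05)`; with `external_cue_rate=0.5` both probabilities climb towards 1, matching the abstract's claim that mimicry of external signals reshapes the optimisation landscape.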
Related papers
- Learning Communication Policies for Different Follower Behaviors in a Collaborative Reference Game [22.28337771947361]
We evaluate the adaptability of neural artificial agents towards assumed partner behaviors in a collaborative reference game.
Our results indicate that this novel ingredient leads to communicative strategies that are less verbose.
arXiv Detail & Related papers (2024-02-07T13:22:17Z)
- Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z)
- The Frost Hollow Experiments: Pavlovian Signalling as a Path to Coordination and Communication Between Agents [7.980685978549764]
This paper contributes a multi-faceted study into what we term Pavlovian signalling.
We establish Pavlovian signalling as a natural bridge between fixed signalling paradigms and fully adaptive communication learning.
Our results point to an actionable, constructivist path towards continual communication learning between reinforcement learning agents.
arXiv Detail & Related papers (2022-03-17T17:49:45Z)
- Promoting Resilience in Multi-Agent Reinforcement Learning via Confusion-Based Communication [5.367993194110255]
We highlight the relationship between a group's ability to collaborate effectively and the group's resilience.
To promote resilience, we suggest facilitating collaboration via a novel confusion-based communication protocol.
We present empirical evaluation of our approach in a variety of MARL settings.
arXiv Detail & Related papers (2021-11-12T09:03:19Z)
- Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z)
- Learning Proxemic Behavior Using Reinforcement Learning with Cognitive Agents [1.0635883951034306]
Proxemics is a branch of non-verbal communication concerned with studying the spatial behavior of people and animals.
We study how agents behave in environments based on proxemic behavior.
arXiv Detail & Related papers (2021-08-08T20:45:34Z)
- Adversarial Attacks On Multi-Agent Communication [80.4392160849506]
Modern autonomous systems will soon be deployed at scale, opening up the possibility for cooperative multi-agent systems.
The advantages of such cooperation rely heavily on communication channels, which have been shown to be vulnerable to security breaches.
In this paper, we explore such adversarial attacks in a novel multi-agent setting where agents communicate by sharing learned intermediate representations.
arXiv Detail & Related papers (2021-01-17T00:35:26Z)
- Gaussian Process Based Message Filtering for Robust Multi-Agent Cooperation in the Presence of Adversarial Communication [5.161531917413708]
We consider the problem of providing robustness to adversarial communication in multi-agent systems.
We propose a communication architecture based on Graph Neural Networks (GNNs).
We show that our filtering method is able to reduce the impact that non-cooperative agents cause.
arXiv Detail & Related papers (2020-12-01T14:21:58Z)
- Learning to Communicate and Correct Pose Errors [75.03747122616605]
We study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner.
We propose a novel neural reasoning framework that learns to communicate, to estimate potential errors, and to reach a consensus about those errors.
arXiv Detail & Related papers (2020-11-10T18:19:40Z)
- Exploring Zero-Shot Emergent Communication in Embodied Multi-Agent Populations [59.608216900601384]
We study agents that learn to communicate via actuating their joints in a 3D environment.
We show that under realistic assumptions, namely a non-uniform distribution of intents and a common-knowledge energy cost, these agents can find protocols that generalize to novel partners.
arXiv Detail & Related papers (2020-10-29T19:23:10Z)
- Learning to cooperate: Emergent communication in multi-agent navigation [49.11609702016523]
We show that agents performing a cooperative navigation task learn an interpretable communication protocol.
An analysis of the agents' policies reveals that emergent signals spatially cluster the state space.
Using populations of agents, we show that the emergent protocol has basic compositional structure.
arXiv Detail & Related papers (2020-04-02T16:03:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.