Towards Human-Agent Communication via the Information Bottleneck Principle
- URL: http://arxiv.org/abs/2207.00088v1
- Date: Thu, 30 Jun 2022 20:10:20 GMT
- Title: Towards Human-Agent Communication via the Information Bottleneck Principle
- Authors: Mycal Tucker, Julie Shah, Roger Levy, and Noga Zaslavsky
- Abstract summary: We study how trading off these three factors -- utility, informativeness, and complexity -- shapes emergent communication.
We propose Vector-Quantized Variational Information Bottleneck (VQ-VIB), a method for training neural agents to compress inputs into discrete signals embedded in a continuous space.
- Score: 19.121541894577298
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emergent communication research often focuses on optimizing task-specific
utility as a driver for communication. However, human languages appear to
evolve under pressure to efficiently compress meanings into communication
signals by optimizing the Information Bottleneck tradeoff between
informativeness and complexity. In this work, we study how trading off these
three factors -- utility, informativeness, and complexity -- shapes emergent
communication, including compared to human communication. To this end, we
propose Vector-Quantized Variational Information Bottleneck (VQ-VIB), a method
for training neural agents to compress inputs into discrete signals embedded in
a continuous space. We train agents via VQ-VIB and compare their performance to
previously proposed neural architectures in grounded environments and in a
Lewis reference game. Across all neural architectures and settings, taking into
account communicative informativeness benefits communication convergence rates,
and penalizing communicative complexity leads to human-like lexicon sizes while
maintaining high utility. Additionally, we find that VQ-VIB outperforms other
discrete communication methods. This work demonstrates how fundamental
principles that are believed to characterize human language evolution may
inform emergent communication in artificial agents.
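As a rough illustration of the idea (not the authors' released code), the sketch below shows a minimal VQ-VIB-style speaker in PyTorch: a variational encoder produces a continuous latent, which is snapped to the nearest entry of a learned codebook, so messages are discrete tokens embedded in a continuous space. The module names, dimensions, and loss weights here are illustrative assumptions; the KL term stands in for the complexity penalty, and the quantization losses keep the codebook aligned with the encoder.
```python
# Minimal, hypothetical sketch of a VQ-VIB-style speaker (PyTorch).
# Architecture details, names, and loss weights are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VQVIBSpeaker(nn.Module):
    """Compresses an observation into a discrete token embedded in a continuous space."""

    def __init__(self, obs_dim: int = 16, latent_dim: int = 32, num_tokens: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)       # mean of the variational posterior
        self.log_var = nn.Linear(64, latent_dim)  # log-variance of the posterior
        self.codebook = nn.Embedding(num_tokens, latent_dim)  # discrete message embeddings

    def forward(self, obs: torch.Tensor):
        h = self.encoder(obs)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterised sample from the Gaussian variational posterior.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        # Vector quantization: snap the sample to the nearest codebook vector.
        dists = torch.cdist(z, self.codebook.weight)  # (batch, num_tokens)
        token = dists.argmin(dim=-1)                  # discrete message index
        z_q = self.codebook(token)                    # its continuous embedding
        # Straight-through estimator so gradients reach the encoder.
        message = z + (z_q - z).detach()
        # KL to a standard normal prior: a proxy for communicative complexity.
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=-1).mean()
        # Codebook and commitment losses keep quantization stable.
        vq_loss = F.mse_loss(z_q, z.detach()) + 0.25 * F.mse_loss(z, z_q.detach())
        return message, token, kl, vq_loss
```
In a Lewis reference game, the returned message would be passed to a listener that scores candidate referents; a training objective along the lines of the paper would then combine the listener's utility loss with weighted informativeness and complexity (KL) terms, mirroring the three-way tradeoff studied above.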
Related papers
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z) - Hypergame Theory for Decentralized Resource Allocation in Multi-user Semantic Communications [60.63472821600567]
A novel framework for decentralized computing and communication resource allocation in multi-user semantic communication (SC) systems is proposed.
The challenge of efficiently allocating communication and computing resources is addressed through the application of Stackelberg hypergame theory.
Simulation results show that the proposed Stackelberg hypergame results in efficient usage of communication and computing resources.
arXiv Detail & Related papers (2024-09-26T15:55:59Z) - Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
arXiv Detail & Related papers (2024-09-13T18:28:12Z) - Learning Multi-Agent Communication with Contrastive Learning [3.816854668079928]
We introduce an alternative perspective where communicative messages are considered as different incomplete views of the environment state.
By examining the relationship between messages sent and received, we propose to learn to communicate using contrastive learning.
In communication-essential environments, our method outperforms previous work in both performance and learning speed.
arXiv Detail & Related papers (2023-07-03T23:51:05Z) - On the Role of Emergent Communication for Social Learning in Multi-Agent Reinforcement Learning [0.0]
Social learning uses cues from experts to align heterogeneous policies, reduce sample complexity, and solve partially observable tasks.
This paper proposes an unsupervised method based on the information bottleneck to capture both referential complexity and task-specific utility.
arXiv Detail & Related papers (2023-02-28T03:23:27Z) - Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z) - CommsVAE: Learning the brain's macroscale communication dynamics using coupled sequential VAEs [0.0]
We propose a non-linear generative approach to communication from functional data.
We show that our approach models communication that is more specific to each task.
The specificity of our method means it can have an impact on the understanding of psychiatric disorders.
arXiv Detail & Related papers (2022-10-07T16:20:19Z) - Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z) - Learning to cooperate: Emergent communication in multi-agent navigation [49.11609702016523]
We show that agents performing a cooperative navigation task learn an interpretable communication protocol.
An analysis of the agents' policies reveals that emergent signals spatially cluster the state space.
Using populations of agents, we show that the emergent protocol has basic compositional structure.
arXiv Detail & Related papers (2020-04-02T16:03:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.