Towards True Lossless Sparse Communication in Multi-Agent Systems
- URL: http://arxiv.org/abs/2212.00115v1
- Date: Wed, 30 Nov 2022 20:43:34 GMT
- Title: Towards True Lossless Sparse Communication in Multi-Agent Systems
- Authors: Seth Karten, Mycal Tucker, Siva Kailas, Katia Sycara
- Abstract summary: Communication enables agents to cooperate to achieve their goals.
Recent work in learning sparse individualized communication suffers from high variance during training.
We use the information bottleneck to reframe sparsity as a representation learning problem.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Communication enables agents to cooperate to achieve their goals. Learning
when to communicate, i.e., sparse (in time) communication, and whom to message
is particularly important when bandwidth is limited. Recent work in learning
sparse individualized communication, however, suffers from high variance during
training, where decreasing communication comes at the cost of decreased reward,
particularly in cooperative tasks. We use the information bottleneck to reframe
sparsity as a representation learning problem, which we show naturally enables
lossless sparse communication at lower budgets than prior art. In this paper,
we propose a method for true lossless sparsity in communication via Information
Maximizing Gated Sparse Multi-Agent Communication (IMGS-MAC). Our model uses
two individualized regularization objectives, an information maximization
autoencoder and sparse communication loss, to create informative and sparse
communication. We evaluate the learned communication `language' through direct
causal analysis of messages in non-sparse runs to determine the range of
lossless sparse budgets, which allow zero-shot sparsity, and the range of
sparse budgets that will incur a reward loss, which is minimized by our
learned gating function with few-shot sparsity. To demonstrate the efficacy of
our results, we experiment in cooperative multi-agent tasks where communication
is essential for success. We evaluate our model with both continuous and
discrete messages. We focus our analysis on a variety of ablations to show the
effect of message representations, including their properties, and the lossless
performance of our model.
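The abstract describes two individualized regularization objectives: an information-maximizing autoencoder and a sparse communication loss applied through a learned gating function. A minimal, hypothetical sketch of how such a combined objective might be assembled is below; the function name, the mean-squared-error proxy for the information-maximization term, and the `sparsity_weight` coefficient are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def imgs_mac_style_loss(messages, reconstructions, gate_logits, sparsity_weight=0.1):
    """Illustrative combined objective: a reconstruction term (a stand-in
    for the information-maximization autoencoder objective) plus a penalty
    on the expected fraction of open gates (the sparse communication loss).

    messages:        (n_agents, dim) messages produced by each agent
    reconstructions: (n_agents, dim) autoencoder reconstructions of the inputs
    gate_logits:     (n_agents,) logits of each agent's send/no-send gate
    """
    # Information-maximization proxy: messages should reconstruct the inputs.
    recon_loss = np.mean((messages - reconstructions) ** 2)
    # Sparsity term: expected communication rate under the learned gates.
    comm_rate = np.mean(sigmoid(gate_logits))
    return recon_loss + sparsity_weight * comm_rate, comm_rate

# Toy example: 3 agents, 4-dim messages, two gates mostly closed.
rng = np.random.default_rng(0)
msgs = rng.normal(size=(3, 4))
loss, rate = imgs_mac_style_loss(msgs, msgs, np.array([-4.0, -4.0, 2.0]))
```

With perfect reconstruction (as in the toy call above) the remaining loss is purely the sparsity penalty, so gradient pressure falls entirely on closing gates; this mirrors the paper's framing in which informativeness and sparsity are optimized as separate objectives rather than traded off in a single noisy reward signal.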
Related papers
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z)
- Learning Multi-Agent Communication with Contrastive Learning [3.816854668079928]
We introduce an alternative perspective where communicative messages are considered as different incomplete views of the environment state.
By examining the relationship between messages sent and received, we propose to learn to communicate using contrastive learning.
In communication-essential environments, our method outperforms previous work in both performance and learning speed.
arXiv Detail & Related papers (2023-07-03T23:51:05Z)
- On the Role of Emergent Communication for Social Learning in Multi-Agent Reinforcement Learning [0.0]
Social learning uses cues from experts to align heterogeneous policies, reduce sample complexity, and solve partially observable tasks.
This paper proposes an unsupervised method based on the information bottleneck to capture both referential complexity and task-specific utility.
arXiv Detail & Related papers (2023-02-28T03:23:27Z)
- Emergent Quantized Communication [34.31732248872158]
We propose an alternative approach to achieve discrete communication -- quantization of communicated messages.
Message quantization allows us to train the model end-to-end, achieving superior performance in multiple setups.
arXiv Detail & Related papers (2022-11-04T12:39:45Z)
- Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL)
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z)
- Fundamental Limits of Communication Efficiency for Model Aggregation in Distributed Learning: A Rate-Distortion Approach [54.311495894129585]
We study the limit of communication cost of model aggregation in distributed learning from a rate-distortion perspective.
It is found that the communication gain by exploiting the correlation between worker nodes is significant for SignSGD.
arXiv Detail & Related papers (2022-06-28T13:10:40Z)
- Communication Efficient Distributed Learning for Kernelized Contextual Bandits [58.78878127799718]
We tackle the communication efficiency challenge of learning kernelized contextual bandits in a distributed setting.
We consider non-linear reward mappings, by letting agents collaboratively search in a reproducing kernel Hilbert space.
We rigorously prove that our algorithm attains a sub-linear rate in both regret and communication cost.
arXiv Detail & Related papers (2022-06-10T01:39:15Z)
- The Enforcers: Consistent Sparse-Discrete Methods for Constraining Informative Emergent Communication [5.432350993419402]
Communication enables agents to cooperate to achieve their goals.
Recent work in learning sparse communication suffers from high-variance training, where the price of decreasing communication is a decrease in reward, particularly in cooperative tasks.
This research addresses the above issues by limiting the loss in reward of decreasing communication and eliminating the penalty for discretization.
arXiv Detail & Related papers (2022-01-19T07:31:06Z) - Multi-agent Communication with Graph Information Bottleneck under
Limited Bandwidth (a position paper) [92.11330289225981]
In many real-world scenarios, communication can be expensive and the bandwidth of the multi-agent system is subject to certain constraints.
Redundant messages that occupy communication resources can block the transmission of informative messages and thus jeopardize performance.
We propose a novel multi-agent communication module, CommGIB, which effectively compresses the structure information and node information in the communication graph to deal with bandwidth-constrained settings.
arXiv Detail & Related papers (2021-12-20T07:53:44Z) - Minimizing Communication while Maximizing Performance in Multi-Agent
Reinforcement Learning [5.612141846711729]
Inter-agent communication can significantly increase performance in multi-agent tasks that require co-ordination.
In real-world applications, where communication may be limited by system constraints like bandwidth, power and network capacity, one might need to reduce the number of messages that are sent.
We show that we can reduce communication by 75% with no loss of performance.
arXiv Detail & Related papers (2021-06-15T23:13:51Z)
- Learning to Communicate and Correct Pose Errors [75.03747122616605]
We study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner.
We propose a novel neural reasoning framework that learns to communicate, to estimate potential errors, and to reach a consensus about those errors.
arXiv Detail & Related papers (2020-11-10T18:19:40Z)